Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2020/08/13 04:33:56 UTC

[GitHub] [hadoop] JohnZZGithub opened a new pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

JohnZZGithub opened a new pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223


   …esource to yarn shared cache manager.
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and set the title of the pull request to start with the
   corresponding JIRA issue number (e.g. HADOOP-XXXXX. Fix a typo in YYY.)
   For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   




[GitHub] [hadoop] liuml07 merged pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
liuml07 merged pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223


   




[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-678902213


   @steveloughran  gotcha, thanks anyway.




[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-678902134


   @jiwq  I double-checked and confirmed the PR is the fix for the problem. The reason the non-application-master tasks try to upload is that the code that clears the cache policies didn't work. The code and the bug are in YARN; MR only uses the YARN shared cache. I'm not sure we should move it to the MR project. Thanks.




[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487341619



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {

Review comment:
    The bug is that when the policies are empty, it won't clean up the existing policies.
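    To make the failure mode concrete, here is a minimal sketch against the public `Job` shared-cache policy helpers (the jar name is made up, and this is a sketch rather than the actual test in the patch):
    
    ```
    import java.util.Collections;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    
    public class EmptyPolicyRepro {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
    
        // The submitter marks a (hypothetical) resource as eligible for
        // shared cache upload.
        Job.setFileSharedCacheUploadPolicies(
            conf, Collections.singletonMap("lib.jar", true));
    
        // Clearing the policies with an empty map should reset the config...
        Job.setFileSharedCacheUploadPolicies(
            conf, Collections.<String, Boolean>emptyMap());
    
        // ...but before this fix the empty map hit the early return, the
        // config key was never overwritten, and the stale policy leaked to
        // non-application-master tasks.
        System.out.println(Job.getFileSharedCacheUploadPolicies(conf));
      }
    }
    ```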



[GitHub] [hadoop] liuml07 commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
liuml07 commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487367770



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
    I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be one statement (shorter and clearer)
   ```
      policies.forEach((k,v) -> sb.append(",").append(k).append(DELIM).append(v));
   ```
   
   Thoughts?
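    One caveat with the one-statement form: starting from an empty `StringBuilder` it leaves a leading comma. A stream sketch (assuming `java.util.stream.Collectors` is imported and `DELIM` is the delimiter `Job` already uses) sidesteps the separator bookkeeping:
    ```
        String value = policies.entrySet().stream()
            .map(e -> e.getKey() + DELIM + e.getValue())
            .collect(Collectors.joining(","));
    ```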

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.

Review comment:
    Since this JIRA has been moved from the YARN project to the MAPREDUCE project, should we replace `YARN-10398` in the comment with the new JIRA number `MAPREDUCE-7294`?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
       // If no policy is provided, we will reset the config by setting an empty string value.
   ```

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
    Also, given the code is so simple, maybe we can drop the extra private helper method and move the logic back into the `setSharedCacheUploadPolicies()` method, which itself is only a few lines of code.
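    For illustration, folding the two suggestions together might look like this (a sketch, not the committed code; assumes `java.util.stream.Collectors` is imported):
    ```
      private static void setSharedCacheUploadPolicies(Configuration conf,
          Map<String, Boolean> policies, boolean areFiles) {
        String confParam = areFiles
            ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
            : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
        // A null or empty map yields "", which clears any previously set
        // policies for non-application-master tasks.
        String value = (policies == null) ? ""
            : policies.entrySet().stream()
                .map(e -> e.getKey() + DELIM + e.getValue())
                .collect(Collectors.joining(","));
        conf.set(confParam, value);
      }
    ```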

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
    I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
    can be a shorter and clearer statement, e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1); // do we need this, or it is just fine?
   ```
   
   Thoughts?
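    Alternatively, a `java.util.StringJoiner` avoids the trailing comma, and the `deleteCharAt` question, altogether, since it only inserts the delimiter between elements (sketch):
    ```
        StringJoiner sj = new StringJoiner(",");
        policies.forEach((k, v) -> sj.add(k + DELIM + v));
        return sj.toString();
    ```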



[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-693162109


   @liuml07  Thanks a ton!




[GitHub] [hadoop] jiwq commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
jiwq commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-679207182


   > @jiwq I double-checked and confirmed the PR is the fix for the problem. The reason the non-application-master tasks try to upload is that the code that clears the cache policies didn't work. The code and the bug are in YARN; MR only uses the YARN shared cache. I'm not sure we should move it to the MR project. Thanks.
   
   @JohnZZGithub The YARN Shared Cache serves all YARN applications, but this PR only touches MapReduce code, so I think we should move it to the MAPREDUCE project.




[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-673249933


   https://issues.apache.org/jira/browse/YARN-10398 




[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-692507296


   @liuml07  Thanks a lot for the detailed review; I updated the PR.




[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-691360828








[GitHub] [hadoop] steveloughran commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-678669239


   > @steveloughran Could you please help review the patch? Thanks
   
   I don't do yarn PRs




[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-675689778


   https://issues.apache.org/jira/browse/YARN-10398






[GitHub] [hadoop] liuml07 commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
liuml07 commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487367770



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be one statement (shorter and clearer)
   ```
      policies.forEach((k,v) -> sb.append(",").append(k).append(DELIM).append(v));
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.

Review comment:
       Since this JIRA has been moved from YARN to MAPREDUCE project, should we replace the `YARN-10398` in comment with the new JIRA number `MAPREDUCE-7294`?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
       // If no policy is provided, we will reset the config by setting an empty string value.
   ```

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       Also, given the code is so simple, maybe we can save one extra private helper method, and move the logic back to `setSharedCacheUploadPolicies()` method, which itself has only several lines of code.

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be shorter and clearer statement, for e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1);
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be shorter and clearer statement, for e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1); // do we need this, or it is just fine?
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
   // If no policy is provided, we will reset the config by setting an empty string value.
   ```

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be one statement (shorter and clearer)
   ```
      policies.forEach((k,v) -> sb.append(",").append(k).append(DELIM).append(v));
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.

Review comment:
       Since this JIRA has been moved from YARN to MAPREDUCE project, should we replace the `YARN-10398` in comment with the new JIRA number `MAPREDUCE-7294`?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
       // If no policy is provided, we will reset the config by setting an empty string value.
   ```

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       Also, given the code is so simple, maybe we can do without the extra private helper method and move the logic back into `setSharedCacheUploadPolicies()`, which itself is only a few lines long.
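
   For instance, a sketch of the inlined version (same behavior; `DELIM` and the `MRJobConfig` constants are the existing ones from this file, the rest is just a suggestion):
   ```
    private static void setSharedCacheUploadPolicies(Configuration conf,
        Map<String, Boolean> policies, boolean areFiles) {
      String confParam = areFiles
          ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
          : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
      StringBuilder sb = new StringBuilder();
      if (policies != null) {
        // join "key DELIM value" pairs with commas
        policies.forEach(
            (k, v) -> sb.append(k).append(DELIM).append(v).append(","));
        if (sb.length() > 0) {
          sb.deleteCharAt(sb.length() - 1); // drop the trailing comma
        }
      }
      // an empty value clears any previously set policies, which is the
      // point of this fix for non-AM tasks
      conf.set(confParam, sb.toString());
    }
   ```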

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be a shorter and clearer statement, e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1);
   ```
   
   Thoughts?
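
   Or, to avoid the manual comma handling altogether, a stream-based sketch (needs `java.util.stream.Collectors`; it also naturally yields an empty string for an empty map):
   ```
       String value = policies.entrySet().stream()
           .map(e -> e.getKey() + DELIM + e.getValue())
           .collect(Collectors.joining(","));
   ```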

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be a shorter and clearer statement, e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1); // do we need this, or it is just fine?
   ```
   
   Thoughts?
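
   (On the question in the snippet: without the `deleteCharAt` the built string would end with a dangling comma. Whether that is harmless depends on how the consumer parses the value, so trimming it seems the safer choice.)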

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be one statement (shorter and clearer)
   ```
      policies.forEach((k,v) -> sb.append(",").append(k).append(DELIM).append(v));
   ```
   
   Thoughts?
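
   (One thing to watch with this variant: prepending the comma leaves the result with a leading `,`. Appending the comma and trimming the last character, as in the earlier suggestion, avoids that.)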


[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487341619



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {

Review comment:
       The bug is that when the policies map is empty, the existing policies are not cleaned up.
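
   To make the failure mode concrete, a minimal sketch (the stale value is illustrative; the setter and key are the public ones visible in the hunk above):
   ```
   // imports assumed: org.apache.hadoop.conf.Configuration,
   // org.apache.hadoop.mapreduce.Job, org.apache.hadoop.mapreduce.MRJobConfig,
   // java.util.HashMap
   Configuration conf = new Configuration();
   String key = MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
   conf.set(key, "stale-policy-string"); // left over from an earlier call

   // Old code: an empty map hit the early return, so conf.get(key) still
   // returned "stale-policy-string" and a non-AM task could still upload.
   // Patched code: the value is reset to "", so nothing gets uploaded.
   Job.setArchiveSharedCacheUploadPolicies(conf, new HashMap<String, Boolean>());
   ```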




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] liuml07 commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
liuml07 commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487367770



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be one statement (shorter and clearer)
   ```
      policies.forEach((k,v) -> sb.append(",").append(k).append(DELIM).append(v));
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.

Review comment:
       Since this JIRA has been moved from YARN to MAPREDUCE project, should we replace the `YARN-10398` in comment with the new JIRA number `MAPREDUCE-7294`?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
       // If no policy is provided, we will reset the config by setting an empty string value.
   ```

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       Also, given the code is so simple, maybe we can save one extra private helper method, and move the logic back to `setSharedCacheUploadPolicies()` method, which itself has only several lines of code.

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be shorter and clearer statement, for e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1);
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be shorter and clearer statement, for e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1); // do we need this, or it is just fine?
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
   // If no policy is provided, we will reset the config by setting an empty string value.
   ```

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know following code was mostly borrowed from the existing code, but since we are in Java 8 for Hadoop 3, should we simplify this a bit using this chance?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be one statement (shorter and clearer)
   ```
      policies.forEach((k,v) -> sb.append(",").append(k).append(DELIM).append(v));
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.

Review comment:
       Since this JIRA has been moved from YARN to MAPREDUCE project, should we replace the `YARN-10398` in comment with the new JIRA number `MAPREDUCE-7294`?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
       // If no policy is provided, we will reset the config by setting an empty string value.
   ```

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       Also, given how simple the code is, maybe we can save the extra private helper method and move the logic back into the `setSharedCacheUploadPolicies()` method, which itself has only a few lines of code.
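
   For illustration, a minimal sketch of what the inlined method could look like with the Java 8 suggestion applied (a sketch against the quoted diff only — `DELIM` and the `MRJobConfig` keys are the existing ones, the overall shape here is assumed, not the committed change):
   ```
     private static void setSharedCacheUploadPolicies(Configuration conf,
         Map<String, Boolean> policies, boolean areFiles) {
       String confParam = areFiles
           ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
           : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
       // An empty value clears any previously set policies (the non-AM case).
       String value = "";
       if (policies != null && !policies.isEmpty()) {
         StringBuilder sb = new StringBuilder();
         policies.forEach(
             (k, v) -> sb.append(k).append(DELIM).append(v).append(","));
         sb.deleteCharAt(sb.length() - 1); // drop the trailing ","
         value = sb.toString();
       }
       conf.set(confParam, value);
     }
   ```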

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be a shorter and clearer statement, e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1);
   ```
   
   Thoughts?

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be a shorter and clearer statement, e.g.
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1); // do we need this, or is it just fine?
   ```
   
   Thoughts?
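
   To make the trailing-separator question concrete, here is a quick sketch with hypothetical entries (`DELIM` is the existing separator constant in `Job.java`; `java.util.LinkedHashMap` is used only to fix iteration order):
   ```
       Map<String, Boolean> policies = new LinkedHashMap<>();
       policies.put("a.jar", true);
       policies.put("b.jar", false);
       StringBuilder sb = new StringBuilder();
       policies.forEach((k, v) -> sb.append(k).append(DELIM).append(v).append(","));
       // sb now ends with a trailing "," after the last entry, i.e.
       // "a.jar" + DELIM + "true" + "," + "b.jar" + DELIM + "false" + ","
       sb.deleteCharAt(sb.length() - 1); // needed unless the parser ignores empty entries
   ```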

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be a single, shorter and clearer statement, e.g.
   ```
      policies.forEach((k,v) -> sb.append(",").append(k).append(DELIM).append(v));
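      // note: as written this leaves a leading ","; it would still need
      // e.g. sb.deleteCharAt(0) unless the downstream parser skips empty entries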
   ```
   
   Thoughts?
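
   As an alternative sketch that sidesteps the separator bookkeeping entirely (still assuming Java 8; `Collectors` is `java.util.stream.Collectors`):
   ```
       String value = policies.entrySet().stream()
           .map(e -> e.getKey() + DELIM + e.getValue())
           .collect(java.util.stream.Collectors.joining(","));
   ```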




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-673338966


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 43s |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m  0s |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   2m 31s |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 39s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 38s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 51s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   2m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 15s |  the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   2m 15s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 44s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 46s |  hadoop-mapreduce-client-core in the patch passed.  |
   | -1 :x: |  unit  |  23m  7s |  hadoop-mapreduce-client-app in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 50s |  The patch generated 2 ASF License warnings.  |
   |  |   | 135m 56s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.mapreduce.v2.app.webapp.TestAMWebServices |
   |   | hadoop.mapreduce.v2.app.TestRuntimeEstimators |
   |   | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesAttempts |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2223/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2223 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7c9ebc773364 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e592ec5f8bf |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2223/1/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2223/1/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2223/1/artifact/out/patch-asflicense-problems.txt |
   | Max. process+thread count | 1134 (vs. ulimit of 5500) |
   | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app U: hadoop-mapreduce-project/hadoop-mapreduce-client |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2223/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] liuml07 commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
liuml07 commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487367770



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be a shorter and clearer statement, e.g.:
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1); // do we need this, or is it just fine?
   ```
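   
   Or, to sidestep the trailing-comma bookkeeping entirely, a stream-based sketch (untested; assumes `DELIM` is the existing delimiter constant in `Job.java` plus a `java.util.stream.Collectors` import):
   ```
       // Joins "key<DELIM>value" pairs with commas; an empty map naturally
       // yields "", which matches the reset-to-empty behavior.
       return policies.entrySet().stream()
           .map(e -> e.getKey() + DELIM + e.getValue())
           .collect(Collectors.joining(","));
   ```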
   
   Thoughts?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] liuml07 commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
liuml07 commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487368189



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.

Review comment:
       nit: this sentence can be:
   ```
   // If no policy is provided, we will reset the config by setting an empty string value.
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487341619



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {

Review comment:
       The bug is that when the policies map is empty, the code won't clean up the existing policies.
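   
   To spell that out, a hypothetical sketch (not part of the patch; assumes the archive setter from the diff hunk above and `java.util.Collections`):
   ```
       // Cleanup path for non-AM tasks: the intent is to blank out any
       // policies set at job submission so that only the AM uploads to
       // the shared cache.
       Map<String, Boolean> empty = Collections.emptyMap();
       Job.setArchiveSharedCacheUploadPolicies(conf, empty);
       // Before this patch, the call above returned early (a no-op), so
       // submission-time policies leaked into every task's configuration.
   ```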




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] liuml07 commented on a change in pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
liuml07 commented on a change in pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#discussion_r487367770



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
##########
@@ -1450,26 +1450,33 @@ public static void setArchiveSharedCacheUploadPolicies(Configuration conf,
    */
   private static void setSharedCacheUploadPolicies(Configuration conf,
       Map<String, Boolean> policies, boolean areFiles) {
-    if (policies != null) {
-      StringBuilder sb = new StringBuilder();
-      Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
-      Map.Entry<String, Boolean> e;
-      if (it.hasNext()) {
-        e = it.next();
-        sb.append(e.getKey() + DELIM + e.getValue());
-      } else {
-        // policies is an empty map, just skip setting the parameter
-        return;
-      }
-      while (it.hasNext()) {
-        e = it.next();
-        sb.append("," + e.getKey() + DELIM + e.getValue());
-      }
-      String confParam =
-          areFiles ? MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES
-              : MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
-      conf.set(confParam, sb.toString());
+    String confParam = areFiles ?
+        MRJobConfig.CACHE_FILES_SHARED_CACHE_UPLOAD_POLICIES :
+        MRJobConfig.CACHE_ARCHIVES_SHARED_CACHE_UPLOAD_POLICIES;
+    conf.set(confParam, populateSharedCacheUploadPolicies(policies));
+  }
+
+  private static String populateSharedCacheUploadPolicies(
+      Map<String, Boolean> policies) {
+    // If policies are an empty map or null, we will set EMPTY_STRING.
+    // In other words, cleaning up existing policies. This is useful when we
+    // try to clean up shared cache upload policies for non-application
+    // master tasks. See YARN-10398 for details.
+    if (policies == null || policies.size() == 0) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();

Review comment:
       I know the following code was mostly borrowed from the existing code, but since we are on Java 8 for Hadoop 3, should we take this chance to simplify it a bit?
   
   ```
       Iterator<Map.Entry<String, Boolean>> it = policies.entrySet().iterator();
       Map.Entry<String, Boolean> e;
       if (it.hasNext()) {
         e = it.next();
         sb.append(e.getKey() + DELIM + e.getValue());
       }
       while (it.hasNext()) {
         e = it.next();
         sb.append("," + e.getKey() + DELIM + e.getValue());
       }
   ```
   can be a shorter and clearer statement, e.g.:
   ```
       policies.forEach((k,v) -> sb.append(k).append(DELIM).append(v).append(","));
       sb.deleteCharAt(sb.length() - 1);
   ```
   
   Thoughts?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-675689487


   @steveloughran Could you please help review the patch? Thanks


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: MAPREDUCE-7294. Fix the bug to make sure only application master upload resource to Yarn Shared Cache

Posted by GitBox <gi...@apache.org>.
JohnZZGithub commented on pull request #2223:
URL: https://github.com/apache/hadoop/pull/2223#issuecomment-691360828


   @liuml07 Could you please review it when you get a chance? 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org