Posted to common-commits@hadoop.apache.org by wa...@apache.org on 2017/08/11 17:31:41 UTC

[01/50] [abbrv] hadoop git commit: HADOOP-13963. /bin/bash is hard coded in some of the scripts. Contributed by Ajay Yadav.

Repository: hadoop
Updated Branches:
  refs/heads/YARN-5881 686a634f0 -> 95a819343


HADOOP-13963. /bin/bash is hard coded in some of the scripts. Contributed by Ajay Yadav.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a6fdeb8a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a6fdeb8a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a6fdeb8a

Branch: refs/heads/YARN-5881
Commit: a6fdeb8a872d413c76257a32914ade1d0e944583
Parents: 02bf328
Author: Arpit Agarwal <ar...@apache.org>
Authored: Fri Aug 4 10:40:52 2017 -0700
Committer: Arpit Agarwal <ar...@apache.org>
Committed: Fri Aug 4 10:40:52 2017 -0700

----------------------------------------------------------------------
 dev-support/docker/hadoop_env_checks.sh                            | 2 +-
 dev-support/findHangingTest.sh                                     | 2 +-
 dev-support/verify-xml.sh                                          | 2 +-
 .../src/test/scripts/hadoop-functions_test_helper.bash             | 2 +-
 start-build-env.sh                                                 | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6fdeb8a/dev-support/docker/hadoop_env_checks.sh
----------------------------------------------------------------------
diff --git a/dev-support/docker/hadoop_env_checks.sh b/dev-support/docker/hadoop_env_checks.sh
index 910c802..5cb4b2b 100755
--- a/dev-support/docker/hadoop_env_checks.sh
+++ b/dev-support/docker/hadoop_env_checks.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6fdeb8a/dev-support/findHangingTest.sh
----------------------------------------------------------------------
diff --git a/dev-support/findHangingTest.sh b/dev-support/findHangingTest.sh
index f7ebe47..fcda9ff 100644
--- a/dev-support/findHangingTest.sh
+++ b/dev-support/findHangingTest.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 ##
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6fdeb8a/dev-support/verify-xml.sh
----------------------------------------------------------------------
diff --git a/dev-support/verify-xml.sh b/dev-support/verify-xml.sh
index abab4e6..9ef456a 100755
--- a/dev-support/verify-xml.sh
+++ b/dev-support/verify-xml.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 ##
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6fdeb8a/hadoop-common-project/hadoop-common/src/test/scripts/hadoop-functions_test_helper.bash
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/scripts/hadoop-functions_test_helper.bash b/hadoop-common-project/hadoop-common/src/test/scripts/hadoop-functions_test_helper.bash
index 86608ed..fa34bdf 100755
--- a/hadoop-common-project/hadoop-common/src/test/scripts/hadoop-functions_test_helper.bash
+++ b/hadoop-common-project/hadoop-common/src/test/scripts/hadoop-functions_test_helper.bash
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6fdeb8a/start-build-env.sh
----------------------------------------------------------------------
diff --git a/start-build-env.sh b/start-build-env.sh
index 18e3a8c..94af7e4 100755
--- a/start-build-env.sh
+++ b/start-build-env.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[10/50] [abbrv] hadoop git commit: YARN-6957. Moving logging APIs over to slf4j in hadoop-yarn-server-sharedcachemanager. Contributed by Yeliang Cang.

Posted by wa...@apache.org.
YARN-6957. Moving logging APIs over to slf4j in hadoop-yarn-server-sharedcachemanager. Contributed by Yeliang Cang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b0fbf179
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b0fbf179
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b0fbf179

Branch: refs/heads/YARN-5881
Commit: b0fbf1796585900a37dc4d6a271c5b5b32e9a9da
Parents: 839e077
Author: Akira Ajisaka <aa...@apache.org>
Authored: Mon Aug 7 19:25:40 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Mon Aug 7 19:25:40 2017 +0900

----------------------------------------------------------------------
 .../yarn/server/sharedcachemanager/CleanerService.java      | 7 ++++---
 .../hadoop/yarn/server/sharedcachemanager/CleanerTask.java  | 7 ++++---
 .../server/sharedcachemanager/ClientProtocolService.java    | 7 ++++---
 .../server/sharedcachemanager/SCMAdminProtocolService.java  | 8 ++++----
 .../yarn/server/sharedcachemanager/SharedCacheManager.java  | 9 +++++----
 .../server/sharedcachemanager/metrics/CleanerMetrics.java   | 7 ++++---
 .../server/sharedcachemanager/metrics/ClientSCMMetrics.java | 7 ++++---
 .../metrics/SharedCacheUploaderMetrics.java                 | 8 ++++----
 .../server/sharedcachemanager/store/InMemorySCMStore.java   | 7 ++++---
 .../yarn/server/sharedcachemanager/webapp/SCMWebServer.java | 7 ++++---
 10 files changed, 41 insertions(+), 33 deletions(-)
----------------------------------------------------------------------
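
Every file in this commit applies the same mechanical pattern: drop the commons-logging imports, declare the logger through slf4j's LoggerFactory, and map the fatal level (which slf4j does not provide) onto error. A minimal sketch of the before/after, using a hypothetical class name that is not part of this commit:

    // Before (commons-logging):
    //   import org.apache.commons.logging.Log;
    //   import org.apache.commons.logging.LogFactory;
    //   private static final Log LOG = LogFactory.getLog(MyService.class);

    // After (slf4j):
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class MyService {
      private static final Logger LOG =
          LoggerFactory.getLogger(MyService.class);

      public void run() {
        try {
          // ... service work ...
        } catch (Throwable t) {
          // slf4j has no fatal() method, so LOG.fatal(msg, t) calls
          // become LOG.error(msg, t), as in SharedCacheManager below.
          LOG.error("Error starting MyService", t);
        }
      }
    }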


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerService.java
index 60fc3e5..bcdc46b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerService.java
@@ -26,8 +26,6 @@ import java.util.concurrent.TimeUnit;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
@@ -43,6 +41,8 @@ import org.apache.hadoop.yarn.server.sharedcachemanager.metrics.CleanerMetrics;
 import org.apache.hadoop.yarn.server.sharedcachemanager.store.SCMStore;
 
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The cleaner service that maintains the shared cache area, and cleans up stale
@@ -57,7 +57,8 @@ public class CleanerService extends CompositeService {
    */
   public static final String GLOBAL_CLEANER_PID = ".cleaner_pid";
 
-  private static final Log LOG = LogFactory.getLog(CleanerService.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(CleanerService.class);
 
   private Configuration conf;
   private CleanerMetrics metrics;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerTask.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerTask.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerTask.java
index a7fdcbd..3e0a62b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerTask.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/CleanerTask.java
@@ -21,8 +21,6 @@ package org.apache.hadoop.yarn.server.sharedcachemanager;
 import java.io.IOException;
 import java.util.concurrent.locks.Lock;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.conf.Configuration;
@@ -34,6 +32,8 @@ import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.server.sharedcache.SharedCacheUtil;
 import org.apache.hadoop.yarn.server.sharedcachemanager.metrics.CleanerMetrics;
 import org.apache.hadoop.yarn.server.sharedcachemanager.store.SCMStore;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The task that runs and cleans up the shared cache area for stale entries and
@@ -44,7 +44,8 @@ import org.apache.hadoop.yarn.server.sharedcachemanager.store.SCMStore;
 @Evolving
 class CleanerTask implements Runnable {
   private static final String RENAMED_SUFFIX = "-renamed";
-  private static final Log LOG = LogFactory.getLog(CleanerTask.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(CleanerTask.class);
 
   private final String location;
   private final long sleepTime;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/ClientProtocolService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/ClientProtocolService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/ClientProtocolService.java
index 1dcca6c..4275674 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/ClientProtocolService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/ClientProtocolService.java
@@ -21,8 +21,6 @@ package org.apache.hadoop.yarn.server.sharedcachemanager;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.conf.Configuration;
@@ -45,6 +43,8 @@ import org.apache.hadoop.yarn.server.sharedcache.SharedCacheUtil;
 import org.apache.hadoop.yarn.server.sharedcachemanager.metrics.ClientSCMMetrics;
 import org.apache.hadoop.yarn.server.sharedcachemanager.store.SCMStore;
 import org.apache.hadoop.yarn.server.sharedcachemanager.store.SharedCacheResourceReference;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This service handles all rpc calls from the client to the shared cache
@@ -55,7 +55,8 @@ import org.apache.hadoop.yarn.server.sharedcachemanager.store.SharedCacheResourc
 public class ClientProtocolService extends AbstractService implements
     ClientSCMProtocol {
 
-  private static final Log LOG = LogFactory.getLog(ClientProtocolService.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ClientProtocolService.class);
 
   private final RecordFactory recordFactory = RecordFactoryProvider
       .getRecordFactory(null);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SCMAdminProtocolService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SCMAdminProtocolService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SCMAdminProtocolService.java
index 6f2baf6..e6a885b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SCMAdminProtocolService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SCMAdminProtocolService.java
@@ -21,15 +21,12 @@ package org.apache.hadoop.yarn.server.sharedcachemanager;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.ipc.Server;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.service.AbstractService;
 import org.apache.hadoop.yarn.security.YarnAuthorizationProvider;
 import org.apache.hadoop.yarn.server.api.SCMAdminProtocol;
@@ -41,6 +38,8 @@ import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
 import org.apache.hadoop.yarn.ipc.RPCUtil;
 import org.apache.hadoop.yarn.ipc.YarnRPC;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This service handles all SCMAdminProtocol rpc calls from administrators
@@ -51,7 +50,8 @@ import org.apache.hadoop.yarn.ipc.YarnRPC;
 public class SCMAdminProtocolService extends AbstractService implements
     SCMAdminProtocol {
 
-  private static final Log LOG = LogFactory.getLog(SCMAdminProtocolService.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(SCMAdminProtocolService.class);
 
   private final RecordFactory recordFactory = RecordFactoryProvider
       .getRecordFactory(null);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SharedCacheManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SharedCacheManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SharedCacheManager.java
index 331e29e..ca683f2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SharedCacheManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SharedCacheManager.java
@@ -18,8 +18,6 @@
 
 package org.apache.hadoop.yarn.server.sharedcachemanager;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -36,6 +34,8 @@ import org.apache.hadoop.yarn.server.sharedcachemanager.store.SCMStore;
 import org.apache.hadoop.yarn.server.sharedcachemanager.webapp.SCMWebServer;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This service maintains the shared cache meta data. It handles claiming and
@@ -51,7 +51,8 @@ public class SharedCacheManager extends CompositeService {
    */
   public static final int SHUTDOWN_HOOK_PRIORITY = 30;
 
-  private static final Log LOG = LogFactory.getLog(SharedCacheManager.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(SharedCacheManager.class);
 
   private SCMStore store;
 
@@ -156,7 +157,7 @@ public class SharedCacheManager extends CompositeService {
       sharedCacheManager.init(conf);
       sharedCacheManager.start();
     } catch (Throwable t) {
-      LOG.fatal("Error starting SharedCacheManager", t);
+      LOG.error("Error starting SharedCacheManager", t);
       System.exit(-1);
     }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/CleanerMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/CleanerMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/CleanerMetrics.java
index b86a469..55cb074 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/CleanerMetrics.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/CleanerMetrics.java
@@ -17,8 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.sharedcachemanager.metrics;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.metrics2.MetricsSource;
@@ -31,6 +29,8 @@ import org.apache.hadoop.metrics2.lib.MetricsRegistry;
 import org.apache.hadoop.metrics2.lib.MetricsSourceBuilder;
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This class is for maintaining the various Cleaner activity statistics and
@@ -40,7 +40,8 @@ import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
 @Evolving
 @Metrics(name = "CleanerActivity", about = "Cleaner service metrics", context = "yarn")
 public class CleanerMetrics {
-  public static final Log LOG = LogFactory.getLog(CleanerMetrics.class);
+  public static final Logger LOG =
+      LoggerFactory.getLogger(CleanerMetrics.class);
   private final MetricsRegistry registry = new MetricsRegistry("cleaner");
   private final static CleanerMetrics INSTANCE = create();
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/ClientSCMMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/ClientSCMMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/ClientSCMMetrics.java
index fe960c6..6b45745 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/ClientSCMMetrics.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/ClientSCMMetrics.java
@@ -17,8 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.sharedcachemanager.metrics;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.metrics2.MetricsSystem;
@@ -27,6 +25,8 @@ import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.lib.MetricsRegistry;
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This class is for maintaining  client requests metrics
@@ -37,7 +37,8 @@ import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 @Metrics(about="Client SCM metrics", context="yarn")
 public class ClientSCMMetrics {
 
-  private static final Log LOG = LogFactory.getLog(ClientSCMMetrics.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ClientSCMMetrics.class);
   final MetricsRegistry registry;
   private final static ClientSCMMetrics INSTANCE = create();
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/SharedCacheUploaderMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/SharedCacheUploaderMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/SharedCacheUploaderMetrics.java
index 7fff13a..3cf6632 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/SharedCacheUploaderMetrics.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/metrics/SharedCacheUploaderMetrics.java
@@ -17,8 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.sharedcachemanager.metrics;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.metrics2.MetricsSystem;
@@ -27,6 +25,8 @@ import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.lib.MetricsRegistry;
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This class is for maintaining shared cache uploader requests metrics
@@ -37,8 +37,8 @@ import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 @Metrics(about="shared cache upload metrics", context="yarn")
 public class SharedCacheUploaderMetrics {
 
-  static final Log LOG =
-      LogFactory.getLog(SharedCacheUploaderMetrics.class);
+  static final Logger LOG =
+      LoggerFactory.getLogger(SharedCacheUploaderMetrics.class);
   final MetricsRegistry registry;
   private final static SharedCacheUploaderMetrics INSTANCE = create();
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/InMemorySCMStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/InMemorySCMStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/InMemorySCMStore.java
index 7b769a7..d917d9b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/InMemorySCMStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/InMemorySCMStore.java
@@ -33,8 +33,6 @@ import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
@@ -52,6 +50,8 @@ import org.apache.hadoop.yarn.server.sharedcachemanager.AppChecker;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * A thread safe version of an in-memory SCM store. The thread safety is
@@ -74,7 +74,8 @@ import com.google.common.util.concurrent.ThreadFactoryBuilder;
 @Private
 @Evolving
 public class InMemorySCMStore extends SCMStore {
-  private static final Log LOG = LogFactory.getLog(InMemorySCMStore.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(InMemorySCMStore.class);
 
   private final Map<String, SharedCacheResource> cachedResources =
       new ConcurrentHashMap<String, SharedCacheResource>();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0fbf179/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/webapp/SCMWebServer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/webapp/SCMWebServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/webapp/SCMWebServer.java
index b81ed29..7984090 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/webapp/SCMWebServer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/webapp/SCMWebServer.java
@@ -18,8 +18,6 @@
 
 package org.apache.hadoop.yarn.server.sharedcachemanager.webapp;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -28,6 +26,8 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.sharedcachemanager.SharedCacheManager;
 import org.apache.hadoop.yarn.webapp.WebApp;
 import org.apache.hadoop.yarn.webapp.WebApps;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * A very simple web interface for the metrics reported by
@@ -37,7 +37,8 @@ import org.apache.hadoop.yarn.webapp.WebApps;
 @Private
 @Unstable
 public class SCMWebServer extends AbstractService {
-  private static final Log LOG = LogFactory.getLog(SCMWebServer.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(SCMWebServer.class);
 
   private final SharedCacheManager scm;
   private WebApp webApp;




[49/50] [abbrv] hadoop git commit: YARN-6471. Support to add min/max resource configuration for a queue. (Sunil G via wangda)

Posted by wa...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
index f6ada4f..5b529d6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
@@ -34,7 +34,6 @@ import org.apache.hadoop.yarn.api.records.QueueInfo;
 import org.apache.hadoop.yarn.api.records.QueueState;
 import org.apache.hadoop.yarn.api.records.QueueUserACLInfo;
 import org.apache.hadoop.yarn.api.records.Resource;
-import org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException;
 import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
 import org.apache.hadoop.yarn.security.AccessType;
@@ -45,7 +44,6 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerStat
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedContainerChangeRequest;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesLogger;
@@ -60,6 +58,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaS
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.PlacementSet;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.PlacementSetUtils;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 
 import java.io.IOException;
@@ -69,6 +68,7 @@ import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 
 @Private
 @Evolving
@@ -163,31 +163,78 @@ public class ParentQueue extends AbstractCSQueue {
       writeLock.lock();
       // Validate
       float childCapacities = 0;
+      Resource minResDefaultLabel = Resources.createResource(0, 0);
       for (CSQueue queue : childQueues) {
         childCapacities += queue.getCapacity();
+        Resources.addTo(minResDefaultLabel, queue.getQueueResourceQuotas()
+            .getConfiguredMinResource());
+
+        // If any child queue is using percentage based capacity model vs parent
+        // queues' absolute configuration or vice versa, throw back an
+        // exception.
+        if (!queueName.equals("root") && getCapacity() != 0f
+            && !queue.getQueueResourceQuotas().getConfiguredMinResource()
+                .equals(Resources.none())) {
+          throw new IllegalArgumentException("Parent queue '" + getQueueName()
+              + "' and child queue '" + queue.getQueueName()
+              + "' should use either percentage based capacity"
+              + " configuration or absolute resource together.");
+        }
       }
+
       float delta = Math.abs(1.0f - childCapacities);  // crude way to check
       // allow capacities being set to 0, and enforce child 0 if parent is 0
-      if (((queueCapacities.getCapacity() > 0) && (delta > PRECISION)) || (
-          (queueCapacities.getCapacity() == 0) && (childCapacities > 0))) {
-        throw new IllegalArgumentException(
-            "Illegal" + " capacity of " + childCapacities
-                + " for children of queue " + queueName);
+      if ((minResDefaultLabel.equals(Resources.none())
+          && (queueCapacities.getCapacity() > 0) && (delta > PRECISION))
+          || ((queueCapacities.getCapacity() == 0) && (childCapacities > 0))) {
+        throw new IllegalArgumentException("Illegal" + " capacity of "
+            + childCapacities + " for children of queue " + queueName);
       }
       // check label capacities
       for (String nodeLabel : queueCapacities.getExistingNodeLabels()) {
         float capacityByLabel = queueCapacities.getCapacity(nodeLabel);
         // check children's labels
         float sum = 0;
+        Resource minRes = Resources.createResource(0, 0);
+        Resource resourceByLabel = labelManager.getResourceByLabel(nodeLabel,
+            scheduler.getClusterResource());
         for (CSQueue queue : childQueues) {
           sum += queue.getQueueCapacities().getCapacity(nodeLabel);
+
+          // If any child queue of a label is using percentage based capacity
+          // model vs parent queues' absolute configuration or vice versa, throw
+          // back an exception
+          if (!queueName.equals("root") && !this.capacityConfigType
+              .equals(queue.getCapacityConfigType())) {
+            throw new IllegalArgumentException("Parent queue '" + getQueueName()
+                + "' and child queue '" + queue.getQueueName()
+                + "' should use either percentage based capacity"
+                + "configuration or absolute resource together for label:"
+                + nodeLabel);
+          }
+
+          // Accumulate all min/max resource configured for all child queues.
+          Resources.addTo(minRes, queue.getQueueResourceQuotas()
+              .getConfiguredMinResource(nodeLabel));
         }
-        if ((capacityByLabel > 0 && Math.abs(1.0f - sum) > PRECISION)
+        if ((minResDefaultLabel.equals(Resources.none()) && capacityByLabel > 0
+            && Math.abs(1.0f - sum) > PRECISION)
             || (capacityByLabel == 0) && (sum > 0)) {
           throw new IllegalArgumentException(
               "Illegal" + " capacity of " + sum + " for children of queue "
                   + queueName + " for label=" + nodeLabel);
         }
+
+        // Ensure that for each parent queue: parent.min-resource >=
+        // Σ(child.min-resource).
+        Resource parentMinResource = queueResourceQuotas
+            .getConfiguredMinResource(nodeLabel);
+        if (!parentMinResource.equals(Resources.none()) && Resources.lessThan(
+            resourceCalculator, resourceByLabel, parentMinResource, minRes)) {
+          throw new IllegalArgumentException("Parent Queues" + " capacity: "
+              + parentMinResource + " is less than" + " its children:"
+              + minRes + " for queue:" + queueName);
+        }
       }
 
       this.childQueues.clear();
@@ -690,11 +737,8 @@ public class ParentQueue extends AbstractCSQueue {
         child.getQueueResourceUsage().getUsed(nodePartition));
 
     // Get child's max resource
-    Resource childConfiguredMaxResource = Resources.multiplyAndNormalizeDown(
-        resourceCalculator,
-        labelManager.getResourceByLabel(nodePartition, clusterResource),
-        child.getQueueCapacities().getAbsoluteMaximumCapacity(nodePartition),
-        minimumAllocation);
+    Resource childConfiguredMaxResource = getEffectiveMaxCapacityDown(
+        nodePartition, minimumAllocation);
 
     // Child's limit should be capped by child configured max resource
     childLimit =
@@ -830,6 +874,14 @@ public class ParentQueue extends AbstractCSQueue {
       ResourceLimits resourceLimits) {
     try {
       writeLock.lock();
+
+      // Update effective capacity in all parent queue.
+      Set<String> configuredNodelabels = csContext.getConfiguration()
+          .getConfiguredNodeLabels(getQueuePath());
+      for (String label : configuredNodelabels) {
+        calculateEffectiveResourcesAndCapacity(label, clusterResource);
+      }
+
       // Update all children
       for (CSQueue childQueue : childQueues) {
         // Get ResourceLimits of child queue before assign containers
@@ -851,6 +903,110 @@ public class ParentQueue extends AbstractCSQueue {
     return true;
   }
 
+  private void calculateEffectiveResourcesAndCapacity(String label,
+      Resource clusterResource) {
+
+    // For root queue, ensure that max/min resource is updated to latest
+    // cluster resource.
+    Resource resourceByLabel = labelManager.getResourceByLabel(label,
+        clusterResource);
+    if (getQueueName().equals("root")) {
+      queueResourceQuotas.setConfiguredMinResource(label, resourceByLabel);
+      queueResourceQuotas.setConfiguredMaxResource(label, resourceByLabel);
+      queueResourceQuotas.setEffectiveMinResource(label, resourceByLabel);
+      queueResourceQuotas.setEffectiveMaxResource(label, resourceByLabel);
+    }
+
+    // Total configured min resources of direct children of queue
+    Resource configuredMinResources = Resource.newInstance(0L, 0);
+    for (CSQueue childQueue : getChildQueues()) {
+      Resources.addTo(configuredMinResources,
+          childQueue.getQueueResourceQuotas().getConfiguredMinResource(label));
+    }
+
+    // Factor to scale down effective resource: When cluster has sufficient
+    // resources, effective_min_resources will be same as configured
+    // min_resources.
+    float effectiveMinRatio = 1;
+    ResourceCalculator rc = this.csContext.getResourceCalculator();
+    if (getQueueName().equals("root")) {
+      if (!resourceByLabel.equals(Resources.none()) && Resources.lessThan(rc,
+          clusterResource, resourceByLabel, configuredMinResources)) {
+        effectiveMinRatio = Resources.divide(rc, clusterResource,
+            resourceByLabel, configuredMinResources);
+      }
+    } else {
+      if (Resources.lessThan(rc, clusterResource,
+          queueResourceQuotas.getEffectiveMinResource(label),
+          configuredMinResources)) {
+        effectiveMinRatio = Resources.divide(rc, clusterResource,
+            queueResourceQuotas.getEffectiveMinResource(label),
+            configuredMinResources);
+      }
+    }
+
+    // loop and do this for all child queues
+    for (CSQueue childQueue : getChildQueues()) {
+      Resource minResource = childQueue.getQueueResourceQuotas()
+          .getConfiguredMinResource(label);
+
+      // Update effective resource (min/max) to each child queue.
+      if (childQueue.getCapacityConfigType()
+          .equals(CapacityConfigType.ABSOLUTE_RESOURCE)) {
+        childQueue.getQueueResourceQuotas().setEffectiveMinResource(label,
+            Resources.multiply(minResource, effectiveMinRatio));
+
+        // Max resource of a queue should be a minimum of {configuredMaxRes,
+        // parentMaxRes}. parentMaxRes could be configured value. But if not
+        // present could also be taken from effective max resource of parent.
+        Resource parentMaxRes = queueResourceQuotas
+            .getConfiguredMaxResource(label);
+        if (parentMaxRes.equals(Resources.none())) {
+          parentMaxRes = parent.getQueueResourceQuotas()
+              .getEffectiveMaxResource(label);
+        }
+
+        // Minimum of {childMaxResource, parentMaxRes}. However if
+        // childMaxResource is empty, consider parent's max resource alone.
+        Resource childMaxResource = childQueue.getQueueResourceQuotas()
+            .getConfiguredMaxResource(label);
+        Resource effMaxResource = Resources.min(resourceCalculator,
+            resourceByLabel, childMaxResource.equals(Resources.none())
+                ? parentMaxRes
+                : childMaxResource,
+            parentMaxRes);
+        childQueue.getQueueResourceQuotas().setEffectiveMaxResource(label,
+            Resources.clone(effMaxResource));
+      } else {
+        childQueue.getQueueResourceQuotas().setEffectiveMinResource(label,
+            Resources.multiply(resourceByLabel,
+                childQueue.getQueueCapacities().getAbsoluteCapacity(label)));
+        childQueue.getQueueResourceQuotas().setEffectiveMaxResource(label,
+            Resources.multiply(resourceByLabel, childQueue.getQueueCapacities()
+                .getAbsoluteMaximumCapacity(label)));
+
+        childQueue.getQueueResourceQuotas().setEffectiveMinResourceUp(label,
+            Resources.multiplyAndNormalizeUp(rc, resourceByLabel,
+                childQueue.getQueueCapacities().getAbsoluteCapacity(label),
+                minimumAllocation));
+        childQueue.getQueueResourceQuotas().setEffectiveMaxResourceUp(label,
+            Resources.multiplyAndNormalizeUp(rc,
+                resourceByLabel, childQueue.getQueueCapacities()
+                    .getAbsoluteMaximumCapacity(label),
+                    minimumAllocation));
+      }
+
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Updating effective min resource for queue:"
+            + childQueue.getQueueName() + " as effMinResource="
+            + childQueue.getQueueResourceQuotas().getEffectiveMinResource(label)
+            + "and Updating effective max resource as effMaxResource="
+            + childQueue.getQueueResourceQuotas()
+                .getEffectiveMaxResource(label));
+      }
+    }
+  }
+
   @Override
   public List<CSQueue> getChildQueues() {
     try {
@@ -980,9 +1136,21 @@ public class ParentQueue extends AbstractCSQueue {
        * When this happens, we have to preempt killable container (on same or different
        * nodes) of parent queue to avoid violating parent's max resource.
        */
-      if (getQueueCapacities().getAbsoluteMaximumCapacity(nodePartition)
-          < getQueueCapacities().getAbsoluteUsedCapacity(nodePartition)) {
-        killContainersToEnforceMaxQueueCapacity(nodePartition, clusterResource);
+      if (!queueResourceQuotas.getEffectiveMaxResource(nodePartition)
+          .equals(Resources.none())) {
+        if (Resources.lessThan(resourceCalculator, clusterResource,
+            queueResourceQuotas.getEffectiveMaxResource(nodePartition),
+            queueUsage.getUsed(nodePartition))) {
+          killContainersToEnforceMaxQueueCapacity(nodePartition,
+              clusterResource);
+        }
+      } else {
+        if (getQueueCapacities()
+            .getAbsoluteMaximumCapacity(nodePartition) < getQueueCapacities()
+                .getAbsoluteUsedCapacity(nodePartition)) {
+          killContainersToEnforceMaxQueueCapacity(nodePartition,
+              clusterResource);
+        }
       }
     } finally {
       writeLock.unlock();
@@ -999,8 +1167,7 @@ public class ParentQueue extends AbstractCSQueue {
 
     Resource partitionResource = labelManager.getResourceByLabel(partition,
         null);
-    Resource maxResource = Resources.multiply(partitionResource,
-        getQueueCapacities().getAbsoluteMaximumCapacity(partition));
+    Resource maxResource = getEffectiveMaxCapacity(partition);
 
     while (Resources.greaterThan(resourceCalculator, partitionResource,
         queueUsage.getUsed(partition), maxResource)) {

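The core of calculateEffectiveResourcesAndCapacity() above is the effectiveMinRatio scale-down: when the configured minimums of the children exceed what the cluster actually offers for a label, every child's effective minimum shrinks by the same factor. A worked sketch with illustrative numbers (the values and class name are not taken from the patch):

    public class EffectiveMinRatioSketch {
      public static void main(String[] args) {
        long clusterMemoryMb = 80L * 1024;      // cluster offers 80 GB
        long sumConfiguredMinMb = 100L * 1024;  // children ask for 100 GB total

        // Ratio stays 1 while the cluster can satisfy all configured minimums.
        float effectiveMinRatio = sumConfiguredMinMb > clusterMemoryMb
            ? (float) clusterMemoryMb / sumConfiguredMinMb  // 0.8 here
            : 1.0f;

        long childConfiguredMinMb = 40L * 1024; // one child's configured min
        long childEffectiveMinMb =
            (long) (childConfiguredMinMb * effectiveMinRatio);

        System.out.println("ratio=" + effectiveMinRatio
            + ", effective min=" + childEffectiveMinMb + " MB"); // 32768 MB
      }
    }
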
http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
index 5f7d185..a066a35 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
@@ -686,10 +686,7 @@ public class UsersManager implements AbstractUsersManager {
      * * If we're running over capacity, then its (usedResources + required)
      * (which extra resources we are allocating)
      */
-    Resource queueCapacity = Resources.multiplyAndNormalizeUp(
-        resourceCalculator, partitionResource,
-        lQueue.getQueueCapacities().getAbsoluteCapacity(nodePartition),
-        lQueue.getMinimumAllocation());
+    Resource queueCapacity = lQueue.getEffectiveCapacityUp(nodePartition);
 
     /*
      * Assume we have required resource equals to minimumAllocation, this can

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/PriorityUtilizationQueueOrderingPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/PriorityUtilizationQueueOrderingPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/PriorityUtilizationQueueOrderingPolicy.java
index 0544387..4985a1a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/PriorityUtilizationQueueOrderingPolicy.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/PriorityUtilizationQueueOrderingPolicy.java
@@ -20,9 +20,11 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy;
 
 import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
+import org.apache.hadoop.yarn.util.resource.Resources;
 
 import java.util.ArrayList;
 import java.util.Collections;
@@ -121,6 +123,15 @@ public class PriorityUtilizationQueueOrderingPolicy implements QueueOrderingPoli
       // For queue with same used ratio / priority, queue with higher configured
       // capacity goes first
       if (0 == rc) {
+        Resource minEffRes1 = q1.getQueueResourceQuotas()
+            .getConfiguredMinResource(p);
+        Resource minEffRes2 = q2.getQueueResourceQuotas()
+            .getConfiguredMinResource(p);
+        if (!minEffRes1.equals(Resources.none())
+            && !minEffRes2.equals(Resources.none())) {
+          return minEffRes2.compareTo(minEffRes1);
+        }
+
         float abs1 = q1.getQueueCapacities().getAbsoluteCapacity(p);
         float abs2 = q2.getQueueCapacities().getAbsoluteCapacity(p);
         return Float.compare(abs2, abs1);

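The new tie-break above orders queues that have identical used ratio and priority by their configured minimum resource, descending, before falling back to absolute capacity. A hypothetical standalone comparator showing the same ordering (the names here are illustrative, not YARN API):

    import java.util.Arrays;
    import java.util.Comparator;

    public class QueueTieBreakSketch {
      static class Q {
        final String name;
        final long minMemMb;  // stands in for the configured min resource
        Q(String name, long minMemMb) {
          this.name = name;
          this.minMemMb = minMemMb;
        }
      }

      public static void main(String[] args) {
        // Descending order, mirroring "return minEffRes2.compareTo(minEffRes1)".
        Comparator<Q> byMinDesc = (a, b) -> Long.compare(b.minMemMb, a.minMemMb);
        Q[] queues = { new Q("small", 10 * 1024), new Q("large", 20 * 1024) };
        Arrays.sort(queues, byMinDesc);
        System.out.println(queues[0].name);  // large
      }
    }
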
http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
index 22705cc..86b2fea 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java
@@ -62,6 +62,8 @@ public class CapacitySchedulerQueueInfo {
   protected long pendingContainers;
   protected QueueCapacitiesInfo capacities;
   protected ResourcesInfo resources;
+  protected ResourceInfo minEffectiveCapacity;
+  protected ResourceInfo maxEffectiveCapacity;
 
   CapacitySchedulerQueueInfo() {
   };
@@ -105,6 +107,11 @@ public class CapacitySchedulerQueueInfo {
 
     ResourceUsage queueResourceUsage = q.getQueueResourceUsage();
     populateQueueResourceUsage(queueResourceUsage);
+
+    minEffectiveCapacity = new ResourceInfo(
+        q.getQueueResourceQuotas().getEffectiveMinResource());
+    maxEffectiveCapacity = new ResourceInfo(
+        q.getQueueResourceQuotas().getEffectiveMaxResource());
   }
 
   protected void populateQueueResourceUsage(ResourceUsage queueResourceUsage) {
@@ -200,4 +207,12 @@ public class CapacitySchedulerQueueInfo {
   public ResourcesInfo getResources() {
     return resources;
   }
+
+  public ResourceInfo getMinEffectiveCapacity(){
+    return minEffectiveCapacity;
+  }
+
+  public ResourceInfo getMaxEffectiveCapacity(){
+    return maxEffectiveCapacity;
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
index 2d76127..30cb8d3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatRequest;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
 import org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerRequest;
 import org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerResponse;
+import org.apache.hadoop.yarn.server.api.protocolrecords.UnRegisterNodeManagerRequest;
 import org.apache.hadoop.yarn.server.api.records.MasterKey;
 import org.apache.hadoop.yarn.server.api.records.NodeHealthStatus;
 import org.apache.hadoop.yarn.server.api.records.NodeStatus;
@@ -117,6 +118,13 @@ public class MockNM {
         true, ++responseId);
   }
 
+  public void unRegisterNode() throws Exception {
+    UnRegisterNodeManagerRequest request = Records
+        .newRecord(UnRegisterNodeManagerRequest.class);
+    request.setNodeId(nodeId);
+    resourceTracker.unRegisterNodeManager(request);
+  }
+
   public RegisterNodeManagerResponse registerNode() throws Exception {
     return registerNode(null, null);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
index e967807..4ccbb92 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
@@ -803,6 +803,12 @@ public class MockRM extends ResourceManager {
     return rmApp;
   }
 
+  public MockNM unRegisterNode(MockNM nm) throws Exception {
+    nm.unRegisterNode();
+    drainEventsImplicitly();
+    return nm;
+  }
+
   public MockNM registerNode(String nodeIdStr, int memory) throws Exception {
     MockNM nm = new MockNM(nodeIdStr, memory, getResourceTrackerService());
     nm.registerNode();

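Together, MockNM.unRegisterNode() and MockRM.unRegisterNode(MockNM) let a test drive a full register/unregister cycle. A hedged sketch of how they might be exercised, assuming a bare Configuration is sufficient for the test context (helper names are taken from the diffs above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
    import org.apache.hadoop.yarn.server.resourcemanager.MockRM;

    public class UnregisterNodeSketch {
      public static void main(String[] args) throws Exception {
        MockRM rm = new MockRM(new Configuration());
        rm.start();
        MockNM nm = rm.registerNode("127.0.0.1:1234", 8192); // 8 GB node
        rm.unRegisterNode(nm);  // sends UnRegisterNodeManagerRequest for nm
        rm.stop();
      }
    }
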
http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
index 4fc0ea4..591d5f3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.QueueOrderingPolicy;
@@ -641,9 +642,11 @@ public class ProportionalCapacityPreemptionPolicyMockFramework {
 
     QueueCapacities qc = new QueueCapacities(0 == myLevel);
     ResourceUsage ru = new ResourceUsage();
+    QueueResourceQuotas qr = new QueueResourceQuotas();
 
     when(queue.getQueueCapacities()).thenReturn(qc);
     when(queue.getQueueResourceUsage()).thenReturn(ru);
+    when(queue.getQueueResourceQuotas()).thenReturn(qr);
 
     LOG.debug("Setup queue, name=" + queue.getQueueName() + " path="
         + queue.getQueuePath());
@@ -676,7 +679,17 @@ public class ProportionalCapacityPreemptionPolicyMockFramework {
       qc.setAbsoluteMaximumCapacity(partitionName, absMax);
       qc.setAbsoluteUsedCapacity(partitionName, absUsed);
       qc.setUsedCapacity(partitionName, used);
+      qr.setEffectiveMaxResource(parseResourceFromString(values[1].trim()));
+      qr.setEffectiveMinResource(parseResourceFromString(values[0].trim()));
+      qr.setEffectiveMaxResource(partitionName,
+          parseResourceFromString(values[1].trim()));
+      qr.setEffectiveMinResource(partitionName,
+          parseResourceFromString(values[0].trim()));
       when(queue.getUsedCapacity()).thenReturn(used);
+      when(queue.getEffectiveCapacity(partitionName))
+          .thenReturn(parseResourceFromString(values[0].trim()));
+      when(queue.getEffectiveMaxCapacity(partitionName))
+          .thenReturn(parseResourceFromString(values[1].trim()));
       ru.setPending(partitionName, pending);
       // Setup reserved resource if it contained by input config
       Resource reserved = Resources.none();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
index a14a2b1..b881323 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.QueueOrderingPolicy;
 import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey;
@@ -48,7 +49,6 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.preempti
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.ContainerPreemptEvent;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.OrderingPolicy;
 import org.apache.hadoop.yarn.util.Clock;
@@ -435,8 +435,8 @@ public class TestProportionalCapacityPreemptionPolicy {
     policy.editSchedule();
     // queueF(appD) wants resources. Verify that resources come from
     // queueE(appC), a sibling, and from queueB(appA) because queueA is
     // over capacity.
-    verify(mDisp, times(28)).handle(argThat(new IsPreemptionRequestFor(appA)));
-    verify(mDisp, times(22)).handle(argThat(new IsPreemptionRequestFor(appC)));
+    verify(mDisp, times(27)).handle(argThat(new IsPreemptionRequestFor(appA)));
+    verify(mDisp, times(23)).handle(argThat(new IsPreemptionRequestFor(appC)));
 
     // Need to call setup() again to reset mDisp
     setup();
@@ -1170,6 +1170,17 @@ public class TestProportionalCapacityPreemptionPolicy {
     when(root.getQueuePath()).thenReturn(CapacitySchedulerConfiguration.ROOT);
     boolean preemptionDisabled = mockPreemptionStatus("root");
     when(root.getPreemptionDisabled()).thenReturn(preemptionDisabled);
+    QueueResourceQuotas rootQr = new QueueResourceQuotas();
+    rootQr.setEffectiveMaxResource(Resource.newInstance(maxCap[0], 0));
+    rootQr.setEffectiveMinResource(abs[0]);
+    rootQr.setEffectiveMaxResource(RMNodeLabelsManager.NO_LABEL,
+        Resource.newInstance(maxCap[0], 0));
+    rootQr.setEffectiveMinResource(RMNodeLabelsManager.NO_LABEL, abs[0]);
+    when(root.getQueueResourceQuotas()).thenReturn(rootQr);
+    when(root.getEffectiveCapacity(RMNodeLabelsManager.NO_LABEL))
+        .thenReturn(abs[0]);
+    when(root.getEffectiveMaxCapacity(RMNodeLabelsManager.NO_LABEL))
+        .thenReturn(Resource.newInstance(maxCap[0], 0));
 
     for (int i = 1; i < queues.length; ++i) {
       final CSQueue q;
@@ -1200,6 +1211,18 @@ public class TestProportionalCapacityPreemptionPolicy {
       qc.setAbsoluteMaximumCapacity(maxCap[i] / (float) tot.getMemorySize());
       when(q.getQueueCapacities()).thenReturn(qc);
 
+      QueueResourceQuotas qr = new QueueResourceQuotas();
+      qr.setEffectiveMaxResource(Resource.newInstance(maxCap[i], 0));
+      qr.setEffectiveMinResource(abs[i]);
+      qr.setEffectiveMaxResource(RMNodeLabelsManager.NO_LABEL,
+          Resource.newInstance(maxCap[i], 0));
+      qr.setEffectiveMinResource(RMNodeLabelsManager.NO_LABEL, abs[i]);
+      when(q.getQueueResourceQuotas()).thenReturn(qr);
+      when(q.getEffectiveCapacity(RMNodeLabelsManager.NO_LABEL))
+          .thenReturn(abs[i]);
+      when(q.getEffectiveMaxCapacity(RMNodeLabelsManager.NO_LABEL))
+          .thenReturn(Resource.newInstance(maxCap[i], 0));
+
       String parentPathName = p.getQueuePath();
       parentPathName = (parentPathName == null) ? "root" : parentPathName;
       String queuePathName = (parentPathName + "." + queueName).replace("/",

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF.java
index 7784549..a1d89d7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF.java
@@ -67,9 +67,9 @@ public class TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF
     conf.set(CapacitySchedulerConfiguration.INTRAQUEUE_PREEMPTION_ORDER_POLICY,
         "priority_first");
 
-    String labelsConfig = "=100:200,true;";
+    String labelsConfig = "=100:50,true;";
     String nodesConfig = // n1 has no label
-        "n1= res=100:200";
+        "n1= res=100:50";
     String queuesConfig =
         // guaranteed,max,used,pending,reserved
         "root(=[100:50 100:50 80:40 120:60 0]);" + // root
@@ -105,7 +105,7 @@ public class TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF
     verify(mDisp, times(1)).handle(argThat(
         new TestProportionalCapacityPreemptionPolicy.IsPreemptionRequestFor(
             getAppAttemptId(4))));
-    verify(mDisp, times(7)).handle(argThat(
+    verify(mDisp, times(3)).handle(argThat(
         new TestProportionalCapacityPreemptionPolicy.IsPreemptionRequestFor(
             getAppAttemptId(3))));
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestAbsoluteResourceConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestAbsoluteResourceConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestAbsoluteResourceConfiguration.java
new file mode 100644
index 0000000..5a66281
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestAbsoluteResourceConfiguration.java
@@ -0,0 +1,516 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
+import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.util.resource.Resources;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestAbsoluteResourceConfiguration {
+
+  private static final int GB = 1024;
+
+  private static final String QUEUEA = "queueA";
+  private static final String QUEUEB = "queueB";
+  private static final String QUEUEC = "queueC";
+  private static final String QUEUEA1 = "queueA1";
+  private static final String QUEUEA2 = "queueA2";
+  private static final String QUEUEB1 = "queueB1";
+
+  private static final String QUEUEA_FULL = CapacitySchedulerConfiguration.ROOT
+      + "." + QUEUEA;
+  private static final String QUEUEB_FULL = CapacitySchedulerConfiguration.ROOT
+      + "." + QUEUEB;
+  private static final String QUEUEC_FULL = CapacitySchedulerConfiguration.ROOT
+      + "." + QUEUEC;
+  private static final String QUEUEA1_FULL = QUEUEA_FULL + "." + QUEUEA1;
+  private static final String QUEUEA2_FULL = QUEUEA_FULL + "." + QUEUEA2;
+  private static final String QUEUEB1_FULL = QUEUEB_FULL + "." + QUEUEB1;
+
+  private static final Resource QUEUE_A_MINRES = Resource.newInstance(100 * GB,
+      10);
+  private static final Resource QUEUE_A_MAXRES = Resource.newInstance(200 * GB,
+      30);
+  private static final Resource QUEUE_A1_MINRES = Resource.newInstance(50 * GB,
+      5);
+  private static final Resource QUEUE_A2_MINRES = Resource.newInstance(50 * GB,
+      5);
+  private static final Resource QUEUE_B_MINRES = Resource.newInstance(50 * GB,
+      10);
+  private static final Resource QUEUE_B1_MINRES = Resource.newInstance(40 * GB,
+      10);
+  private static final Resource QUEUE_B_MAXRES = Resource.newInstance(150 * GB,
+      30);
+  private static final Resource QUEUE_C_MINRES = Resource.newInstance(50 * GB,
+      10);
+  private static final Resource QUEUE_C_MAXRES = Resource.newInstance(150 * GB,
+      20);
+  private static final Resource QUEUEA_REDUCED = Resource.newInstance(64000, 6);
+  private static final Resource QUEUEB_REDUCED = Resource.newInstance(32000, 6);
+  private static final Resource QUEUEC_REDUCED = Resource.newInstance(32000, 6);
+  private static final Resource QUEUEMAX_REDUCED = Resource.newInstance(128000,
+      20);
+
+  private static Set<String> resourceTypes = new HashSet<>(
+      Arrays.asList("memory", "vcores"));
+
+  private CapacitySchedulerConfiguration setupSimpleQueueConfiguration(
+      boolean isCapacityNeeded) {
+    CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration();
+    csConf.setQueues(CapacitySchedulerConfiguration.ROOT,
+        new String[]{QUEUEA, QUEUEB, QUEUEC});
+
+    // Set default capacities like normal configuration.
+    if (isCapacityNeeded) {
+      csConf.setCapacity(QUEUEA_FULL, 50f);
+      csConf.setCapacity(QUEUEB_FULL, 25f);
+      csConf.setCapacity(QUEUEC_FULL, 25f);
+    }
+
+    return csConf;
+  }
+
+  private CapacitySchedulerConfiguration setupComplexQueueConfiguration(
+      boolean isCapacityNeeded) {
+    CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration();
+    csConf.setQueues(CapacitySchedulerConfiguration.ROOT,
+        new String[]{QUEUEA, QUEUEB, QUEUEC});
+    csConf.setQueues(QUEUEA_FULL, new String[]{QUEUEA1, QUEUEA2});
+    csConf.setQueues(QUEUEB_FULL, new String[]{QUEUEB1});
+
+    // Set default capacities like normal configuration.
+    if (isCapacityNeeded) {
+      csConf.setCapacity(QUEUEA_FULL, 50f);
+      csConf.setCapacity(QUEUEB_FULL, 25f);
+      csConf.setCapacity(QUEUEC_FULL, 25f);
+      csConf.setCapacity(QUEUEA1_FULL, 50f);
+      csConf.setCapacity(QUEUEA2_FULL, 50f);
+      csConf.setCapacity(QUEUEB1_FULL, 100f);
+    }
+
+    return csConf;
+  }
+
+  private CapacitySchedulerConfiguration setupMinMaxResourceConfiguration(
+      CapacitySchedulerConfiguration csConf) {
+    // Update min/max resource to queueA/B/C
+    csConf.setMinimumResourceRequirement("", QUEUEA_FULL, QUEUE_A_MINRES);
+    csConf.setMinimumResourceRequirement("", QUEUEB_FULL, QUEUE_B_MINRES);
+    csConf.setMinimumResourceRequirement("", QUEUEC_FULL, QUEUE_C_MINRES);
+
+    csConf.setMaximumResourceRequirement("", QUEUEA_FULL, QUEUE_A_MAXRES);
+    csConf.setMaximumResourceRequirement("", QUEUEB_FULL, QUEUE_B_MAXRES);
+    csConf.setMaximumResourceRequirement("", QUEUEC_FULL, QUEUE_C_MAXRES);
+
+    return csConf;
+  }
+
+  private CapacitySchedulerConfiguration setupComplexMinMaxResourceConfig(
+      CapacitySchedulerConfiguration csConf) {
+    // Update min/max resource to queueA/B/C
+    csConf.setMinimumResourceRequirement("", QUEUEA_FULL, QUEUE_A_MINRES);
+    csConf.setMinimumResourceRequirement("", QUEUEB_FULL, QUEUE_B_MINRES);
+    csConf.setMinimumResourceRequirement("", QUEUEC_FULL, QUEUE_C_MINRES);
+    csConf.setMinimumResourceRequirement("", QUEUEA1_FULL, QUEUE_A1_MINRES);
+    csConf.setMinimumResourceRequirement("", QUEUEA2_FULL, QUEUE_A2_MINRES);
+    csConf.setMinimumResourceRequirement("", QUEUEB1_FULL, QUEUE_B1_MINRES);
+
+    csConf.setMaximumResourceRequirement("", QUEUEA_FULL, QUEUE_A_MAXRES);
+    csConf.setMaximumResourceRequirement("", QUEUEB_FULL, QUEUE_B_MAXRES);
+    csConf.setMaximumResourceRequirement("", QUEUEC_FULL, QUEUE_C_MAXRES);
+
+    return csConf;
+  }
+
+  @Test
+  public void testSimpleMinMaxResourceConfigurationPerQueue() {
+
+    CapacitySchedulerConfiguration csConf = setupSimpleQueueConfiguration(true);
+    setupMinMaxResourceConfiguration(csConf);
+
+    Assert.assertEquals("Min resource configured for QUEUEA is not correct",
+        QUEUE_A_MINRES,
+        csConf.getMinimumResourceRequirement("", QUEUEA_FULL, resourceTypes));
+    Assert.assertEquals("Max resource configured for QUEUEA is not correct",
+        QUEUE_A_MAXRES,
+        csConf.getMaximumResourceRequirement("", QUEUEA_FULL, resourceTypes));
+    Assert.assertEquals("Min resource configured for QUEUEB is not correct",
+        QUEUE_B_MINRES,
+        csConf.getMinimumResourceRequirement("", QUEUEB_FULL, resourceTypes));
+    Assert.assertEquals("Max resource configured for QUEUEB is not correct",
+        QUEUE_B_MAXRES,
+        csConf.getMaximumResourceRequirement("", QUEUEB_FULL, resourceTypes));
+    Assert.assertEquals("Min resource configured for QUEUEC is not correct",
+        QUEUE_C_MINRES,
+        csConf.getMinimumResourceRequirement("", QUEUEC_FULL, resourceTypes));
+    Assert.assertEquals("Max resource configured for QUEUEC is not correct",
+        QUEUE_C_MAXRES,
+        csConf.getMaximumResourceRequirement("", QUEUEC_FULL, resourceTypes));
+  }
+
+  @Test
+  public void testEffectiveMinMaxResourceConfigurationPerQueue()
+      throws Exception {
+    // create conf with basic queue configuration.
+    CapacitySchedulerConfiguration csConf = setupSimpleQueueConfiguration(
+        false);
+    setupMinMaxResourceConfiguration(csConf);
+
+    csConf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+
+    @SuppressWarnings("resource")
+    MockRM rm = new MockRM(csConf);
+    rm.start();
+
+    // Add a few nodes
+    rm.registerNode("127.0.0.1:1234", 250 * GB, 40);
+
+    // Get queue object to verify min/max resource configuration.
+    CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
+
+    LeafQueue qA = (LeafQueue) cs.getQueue(QUEUEA);
+    Assert.assertNotNull(qA);
+    Assert.assertEquals("Min resource configured for QUEUEA is not correct",
+        QUEUE_A_MINRES, qA.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEA is not correct",
+        QUEUE_A_MAXRES, qA.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEA is not correct",
+        QUEUE_A_MINRES, qA.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEA is not correct",
+        QUEUE_A_MAXRES, qA.queueResourceQuotas.getEffectiveMaxResource());
+
+    LeafQueue qB = (LeafQueue) cs.getQueue(QUEUEB);
+    Assert.assertNotNull(qB);
+    Assert.assertEquals("Min resource configured for QUEUEB is not correct",
+        QUEUE_B_MINRES, qB.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEB is not correct",
+        QUEUE_B_MAXRES, qB.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEB is not correct",
+        QUEUE_B_MINRES, qB.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEB is not correct",
+        QUEUE_B_MAXRES, qB.queueResourceQuotas.getEffectiveMaxResource());
+
+    LeafQueue qC = (LeafQueue) cs.getQueue(QUEUEC);
+    Assert.assertNotNull(qC);
+    Assert.assertEquals("Min resource configured for QUEUEC is not correct",
+        QUEUE_C_MINRES, qC.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEC is not correct",
+        QUEUE_C_MAXRES, qC.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEC is not correct",
+        QUEUE_C_MINRES, qC.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEC is not correct",
+        QUEUE_C_MAXRES, qC.queueResourceQuotas.getEffectiveMaxResource());
+
+    rm.stop();
+  }
+
+  @Test
+  public void testSimpleValidateAbsoluteResourceConfig() throws Exception {
+    /**
+     * Queue structure is as follows:
+     *
+     *          root
+     *         /  |  \
+     *        a   b   c
+     *       / \  |
+     *      a1 a2 b1
+     *
+     * Test below cases:
+     * 1) Configure percentage based capacity and absolute resource together.
+     * 2) As per the above tree structure, ensure all values can be retrieved.
+     * 3) Validate that a queue's min resource cannot exceed its max resource.
+     * 4) Validate that a queue's max resource cannot exceed its parent's
+     *    max resource.
+     */
+    // create conf with basic queue configuration.
+    CapacitySchedulerConfiguration csConf = setupSimpleQueueConfiguration(
+        false);
+    setupMinMaxResourceConfiguration(csConf);
+    csConf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+
+    @SuppressWarnings("resource")
+    MockRM rm = new MockRM(csConf);
+    rm.start();
+
+    // Add a few nodes
+    rm.registerNode("127.0.0.1:1234", 250 * GB, 40);
+
+    // Get queue object to verify min/max resource configuration.
+    CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
+
+    // 1. Create a new config with capacity and min/max together. Ensure an
+    // exception is thrown.
+    CapacitySchedulerConfiguration csConf1 = setupSimpleQueueConfiguration(
+        true);
+    setupMinMaxResourceConfiguration(csConf1);
+
+    try {
+      cs.reinitialize(csConf1, rm.getRMContext());
+      Assert.fail();
+    } catch (IOException e) {
+      Assert.assertTrue(e instanceof IOException);
+      Assert.assertEquals(
+          "Failed to re-init queues : Queue 'queueA' should use either"
+              + " percentage based capacity configuration or absolute resource.",
+          e.getMessage());
+    }
+    rm.stop();
+
+    // 2. Create a new config with min/max alone, using a complex queue
+    // config. Verify all values can be fetched correctly.
+    CapacitySchedulerConfiguration csConf2 = setupComplexQueueConfiguration(
+        false);
+    setupComplexMinMaxResourceConfig(csConf2);
+
+    rm = new MockRM(csConf2);
+    rm.start();
+    rm.registerNode("127.0.0.1:1234", 250 * GB, 40);
+    cs = (CapacityScheduler) rm.getResourceScheduler();
+
+    LeafQueue qA1 = (LeafQueue) cs.getQueue(QUEUEA1);
+    Assert.assertEquals("Effective Min resource for QUEUEA1 is not correct",
+        QUEUE_A1_MINRES, qA1.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEA1 is not correct",
+        QUEUE_A_MAXRES, qA1.queueResourceQuotas.getEffectiveMaxResource());
+
+    LeafQueue qA2 = (LeafQueue) cs.getQueue(QUEUEA2);
+    Assert.assertEquals("Effective Min resource for QUEUEA2 is not correct",
+        QUEUE_A2_MINRES, qA2.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEA2 is not correct",
+        QUEUE_A_MAXRES, qA2.queueResourceQuotas.getEffectiveMaxResource());
+
+    LeafQueue qB1 = (LeafQueue) cs.getQueue(QUEUEB1);
+    Assert.assertNotNull(qB1);
+    Assert.assertEquals("Min resource configured for QUEUEB1 is not correct",
+        QUEUE_B1_MINRES, qB1.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEB1 is not correct",
+        QUEUE_B_MAXRES, qB1.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEB1 is not correct",
+        QUEUE_B1_MINRES, qB1.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEB1 is not correct",
+        QUEUE_B_MAXRES, qB1.queueResourceQuotas.getEffectiveMaxResource());
+
+    LeafQueue qC = (LeafQueue) cs.getQueue(QUEUEC);
+    Assert.assertNotNull(qC);
+    Assert.assertEquals("Min resource configured for QUEUEC is not correct",
+        QUEUE_C_MINRES, qC.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEC is not correct",
+        QUEUE_C_MAXRES, qC.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEC is not correct",
+        QUEUE_C_MINRES, qC.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEC is not correct",
+        QUEUE_C_MAXRES, qC.queueResourceQuotas.getEffectiveMaxResource());
+
+    // 3. Create a new config and make sure one queue's min resource is more
+    // than its max resource configured.
+    CapacitySchedulerConfiguration csConf3 = setupComplexQueueConfiguration(
+        false);
+    setupComplexMinMaxResourceConfig(csConf3);
+
+    csConf3.setMinimumResourceRequirement("", QUEUEB1_FULL, QUEUE_B_MAXRES);
+    csConf3.setMaximumResourceRequirement("", QUEUEB1_FULL, QUEUE_B1_MINRES);
+
+    try {
+      cs.reinitialize(csConf3, rm.getRMContext());
+      Assert.fail();
+    } catch (IOException e) {
+      Assert.assertTrue(e instanceof IOException);
+      Assert.assertEquals(
+          "Failed to re-init queues : Min resource configuration "
+              + "<memory:153600, vCores:30> is greater than its "
+              + "max value:<memory:40960, vCores:10> in queue:queueB1",
+          e.getMessage());
+    }
+
+    // 4. Create a new config and make sure one queue's max resource is more
+    // than its parent's configured max resource.
+    CapacitySchedulerConfiguration csConf4 = setupComplexQueueConfiguration(
+        false);
+    setupComplexMinMaxResourceConfig(csConf4);
+
+    csConf4.setMaximumResourceRequirement("", QUEUEB1_FULL, QUEUE_A_MAXRES);
+
+    try {
+      cs.reinitialize(csConf4, rm.getRMContext());
+      Assert.fail();
+    } catch (IOException e) {
+      Assert.assertTrue(e instanceof IOException);
+      Assert.assertEquals(
+          "Failed to re-init queues : Max resource configuration "
+              + "<memory:204800, vCores:30> is greater than parents max value:"
+              + "<memory:153600, vCores:30> in queue:queueB1",
+          e.getMessage());
+    }
+    rm.stop();
+  }
+
+  @Test
+  public void testComplexValidateAbsoluteResourceConfig() throws Exception {
+    /**
+     * Queue structure is as follows:
+     *
+     *          root
+     *         /  |  \
+     *        a   b   c
+     *       / \  |
+     *      a1 a2 b1
+     *
+     * Test below cases:
+     * 1) A parent and its child queues must use either percentage based or
+     *    absolute resource configuration, not a mix of both.
+     * 2) A parent's min resource must be at least the sum of its children's
+     *    min resources.
+     */
+
+    // create conf with basic queue configuration.
+    CapacitySchedulerConfiguration csConf = setupComplexQueueConfiguration(
+        false);
+    setupComplexMinMaxResourceConfig(csConf);
+    csConf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+
+    @SuppressWarnings("resource")
+    MockRM rm = new MockRM(csConf);
+    rm.start();
+
+    // Add a few nodes
+    rm.registerNode("127.0.0.1:1234", 250 * GB, 40);
+
+    // 1. Explicitly set percentage based config for parent queues. This will
+    // make Queue A,B and C with percentage based and A1,A2 or B1 with absolute
+    // resource.
+    csConf.setCapacity(QUEUEA_FULL, 50f);
+    csConf.setCapacity(QUEUEB_FULL, 25f);
+    csConf.setCapacity(QUEUEC_FULL, 25f);
+
+    // Also unset resource based config.
+    csConf.setMinimumResourceRequirement("", QUEUEA_FULL, Resources.none());
+    csConf.setMinimumResourceRequirement("", QUEUEB_FULL, Resources.none());
+    csConf.setMinimumResourceRequirement("", QUEUEC_FULL, Resources.none());
+
+    // Get queue object to verify min/max resource configuration.
+    CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
+    try {
+      cs.reinitialize(csConf, rm.getRMContext());
+      Assert.fail();
+    } catch (IOException e) {
+      Assert.assertTrue(e instanceof IOException);
+      Assert.assertEquals(
+          "Failed to re-init queues : Parent queue 'queueA' "
+              + "and child queue 'queueA1' should use either percentage based"
+              + " capacity configuration or absolute resource together.",
+          e.getMessage());
+    }
+
+    // 2. Create a new config where a parent queue's min resource is less
+    // than the sum of its children's min resources.
+    CapacitySchedulerConfiguration csConf1 = setupComplexQueueConfiguration(
+        false);
+    setupComplexMinMaxResourceConfig(csConf1);
+
+    // Configure QueueA with a smaller min resource than its children's sum.
+    csConf1.setMinimumResourceRequirement("", QUEUEA_FULL, QUEUE_A1_MINRES);
+
+    try {
+      cs.reinitialize(csConf1, rm.getRMContext());
+      Assert.fail();
+    } catch (IOException e) {
+      Assert.assertTrue(e instanceof IOException);
+      Assert.assertEquals("Failed to re-init queues : Parent Queues capacity: "
+          + "<memory:51200, vCores:5> is less than to its children:"
+          + "<memory:102400, vCores:10> for queue:queueA", e.getMessage());
+    }
+  }
+
+  @Test
+  public void testEffectiveResourceAfterReducingClusterResource()
+      throws Exception {
+    // create conf with basic queue configuration.
+    CapacitySchedulerConfiguration csConf = setupSimpleQueueConfiguration(
+        false);
+    setupMinMaxResourceConfiguration(csConf);
+
+    csConf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+
+    @SuppressWarnings("resource")
+    MockRM rm = new MockRM(csConf);
+    rm.start();
+
+    // Add a few nodes
+    MockNM nm1 = rm.registerNode("127.0.0.1:1234", 125 * GB, 20);
+    rm.registerNode("127.0.0.2:1234", 125 * GB, 20);
+
+    // Get queue object to verify min/max resource configuration.
+    CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
+
+    LeafQueue qA = (LeafQueue) cs.getQueue(QUEUEA);
+    Assert.assertNotNull(qA);
+    Assert.assertEquals("Min resource configured for QUEUEA is not correct",
+        QUEUE_A_MINRES, qA.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEA is not correct",
+        QUEUE_A_MAXRES, qA.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEA is not correct",
+        QUEUE_A_MINRES, qA.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEA is not correct",
+        QUEUE_A_MAXRES, qA.queueResourceQuotas.getEffectiveMaxResource());
+
+    LeafQueue qB = (LeafQueue) cs.getQueue(QUEUEB);
+    Assert.assertNotNull(qB);
+    Assert.assertEquals("Min resource configured for QUEUEB is not correct",
+        QUEUE_B_MINRES, qB.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEB is not correct",
+        QUEUE_B_MAXRES, qB.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEB is not correct",
+        QUEUE_B_MINRES, qB.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEB is not correct",
+        QUEUE_B_MAXRES, qB.queueResourceQuotas.getEffectiveMaxResource());
+
+    LeafQueue qC = (LeafQueue) cs.getQueue(QUEUEC);
+    Assert.assertNotNull(qC);
+    Assert.assertEquals("Min resource configured for QUEUEC is not correct",
+        QUEUE_C_MINRES, qC.queueResourceQuotas.getConfiguredMinResource());
+    Assert.assertEquals("Max resource configured for QUEUEC is not correct",
+        QUEUE_C_MAXRES, qC.queueResourceQuotas.getConfiguredMaxResource());
+    Assert.assertEquals("Effective Min resource for QUEUEC is not correct",
+        QUEUE_C_MINRES, qC.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEC is not correct",
+        QUEUE_C_MAXRES, qC.queueResourceQuotas.getEffectiveMaxResource());
+
+    // unregister one NM.
+    rm.unRegisterNode(nm1);
+
+    // After losing one NM the cluster is halved, so effective min resources
+    // are scaled down proportionally: queueA's effective min becomes
+    // 64000 MB / 6 vcores, and its effective max is capped at the remaining
+    // cluster size of 128000 MB / 20 vcores.
+    Assert.assertEquals("Effective Min resource for QUEUEA is not correct",
+        QUEUEA_REDUCED, qA.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEA is not correct",
+        QUEUEMAX_REDUCED, qA.queueResourceQuotas.getEffectiveMaxResource());
+
+    Assert.assertEquals("Effective Min resource for QUEUEB is not correct",
+        QUEUEB_REDUCED, qB.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEB is not correct",
+        QUEUEMAX_REDUCED, qB.queueResourceQuotas.getEffectiveMaxResource());
+
+    Assert.assertEquals("Effective Min resource for QUEUEC is not correct",
+        QUEUEC_REDUCED, qC.queueResourceQuotas.getEffectiveMinResource());
+    Assert.assertEquals("Effective Max resource for QUEUEC is not correct",
+        QUEUEMAX_REDUCED, qC.queueResourceQuotas.getEffectiveMaxResource());
+
+    rm.stop();
+  }
+}
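
For readers tracing the arithmetic in
testEffectiveResourceAfterReducingClusterResource: once the configured
minimums exceed what the shrunken cluster can offer, effective minimums are
scaled down proportionally. A standalone sketch of that normalization (the
class and variable names are illustrative, not the scheduler's internals):

    public class EffectiveMinSketch {
      public static void main(String[] args) {
        long clusterMb = 128000L;    // 125 GB remain after one NM is lost
        long sumMinMb = 204800L;     // 100 + 50 + 50 GB of configured mins
        long queueAMinMb = 102400L;  // queueA's configured min (100 GB)

        // Configured mins exceed the cluster, so each queue's effective
        // min is scaled down proportionally (vcores follow the same rule).
        long effectiveMin = queueAMinMb * clusterMb / sumMinMb;  // 64000 MB

        // Effective max is simply capped at what the cluster can offer.
        long effectiveMax = Math.min(204800L, clusterMb);        // 128000 MB

        System.out.println(effectiveMin + " / " + effectiveMax);
      }
    }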

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
index 8aca235..24ae244 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
@@ -60,6 +60,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
@@ -86,6 +87,7 @@ public class TestApplicationLimits {
   final static int GB = 1024;
 
   LeafQueue queue;
+  CSQueue root;
   
   private final ResourceCalculator resourceCalculator = new DefaultResourceCalculator();
 
@@ -100,7 +102,7 @@ public class TestApplicationLimits {
     setupQueueConfiguration(csConf);
     
     rmContext = TestUtils.getMockRMContext();
-
+    Resource clusterResource = Resources.createResource(10 * 16 * GB, 10 * 32);
 
     CapacitySchedulerContext csContext = mock(CapacitySchedulerContext.class);
     when(csContext.getConfiguration()).thenReturn(csConf);
@@ -110,10 +112,11 @@ public class TestApplicationLimits {
     when(csContext.getMaximumResourceCapability()).
         thenReturn(Resources.createResource(16*GB, 32));
     when(csContext.getClusterResource()).
-        thenReturn(Resources.createResource(10 * 16 * GB, 10 * 32));
+        thenReturn(clusterResource);
     when(csContext.getResourceCalculator()).
         thenReturn(resourceCalculator);
     when(csContext.getRMContext()).thenReturn(rmContext);
+    when(csContext.getPreemptionManager()).thenReturn(new PreemptionManager());
     
     RMContainerTokenSecretManager containerTokenSecretManager =
         new RMContainerTokenSecretManager(conf);
@@ -122,13 +125,17 @@ public class TestApplicationLimits {
         containerTokenSecretManager);
 
     Map<String, CSQueue> queues = new HashMap<String, CSQueue>();
-    CSQueue root = CapacitySchedulerQueueManager
+    root = CapacitySchedulerQueueManager
         .parseQueue(csContext, csConf, null, "root",
             queues, queues,
             TestUtils.spyHook);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
-    
     queue = spy(new LeafQueue(csContext, A, root, null));
+    QueueResourceQuotas queueResourceQuotas = ((LeafQueue) queues.get(A))
+        .getQueueResourceQuotas();
+    doReturn(queueResourceQuotas).when(queue).getQueueResourceQuotas();
 
     // Stub out ACL checks
     doReturn(true).
@@ -189,6 +196,8 @@ public class TestApplicationLimits {
     // when there is only 1 user, and drops to 2G (the userlimit) when there
     // is a second user
     Resource clusterResource = Resource.newInstance(80 * GB, 40);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
     queue.updateClusterResource(clusterResource, new ResourceLimits(
         clusterResource));
     
@@ -287,6 +296,8 @@ public class TestApplicationLimits {
     CSQueue root = 
         CapacitySchedulerQueueManager.parseQueue(csContext, csConf, null,
             "root", queues, queues, TestUtils.spyHook);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     LeafQueue queue = (LeafQueue)queues.get(A);
     
@@ -357,6 +368,8 @@ public class TestApplicationLimits {
         csContext, csConf, null, "root",
         queues, queues, TestUtils.spyHook);
     clusterResource = Resources.createResource(100 * 16 * GB);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
 
     queue = (LeafQueue)queues.get(A);
 
@@ -378,6 +391,8 @@ public class TestApplicationLimits {
     root = CapacitySchedulerQueueManager.parseQueue(
         csContext, csConf, null, "root",
         queues, queues, TestUtils.spyHook);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
 
     queue = (LeafQueue)queues.get(A);
     assertEquals(9999, (int)csConf.getMaximumApplicationsPerQueue(queue.getQueuePath()));
@@ -393,7 +408,7 @@ public class TestApplicationLimits {
     final String user_0 = "user_0";
     final String user_1 = "user_1";
     final String user_2 = "user_2";
-    
+
     assertEquals(Resource.newInstance(16 * GB, 1),
         queue.calculateAndGetAMResourceLimit());
     assertEquals(Resource.newInstance(8 * GB, 1),
@@ -578,6 +593,7 @@ public class TestApplicationLimits {
         thenReturn(Resources.createResource(16*GB));
     when(csContext.getResourceCalculator()).thenReturn(resourceCalculator);
     when(csContext.getRMContext()).thenReturn(rmContext);
+    when(csContext.getPreemptionManager()).thenReturn(new PreemptionManager());
     
     // Say cluster has 100 nodes of 16G each
     Resource clusterResource = Resources.createResource(100 * 16 * GB);
@@ -586,6 +602,8 @@ public class TestApplicationLimits {
     Map<String, CSQueue> queues = new HashMap<String, CSQueue>();
     CSQueue rootQueue = CapacitySchedulerQueueManager.parseQueue(csContext,
         csConf, null, "root", queues, queues, TestUtils.spyHook);
+    rootQueue.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     ResourceUsage queueCapacities = rootQueue.getQueueResourceUsage();
     when(csContext.getClusterResourceUsage())
@@ -693,6 +711,8 @@ public class TestApplicationLimits {
 
     // Now reduce cluster size and check for the smaller headroom
     clusterResource = Resources.createResource(90*16*GB);
+    rootQueue.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Any change in cluster resource needs to enforce user-limit recomputation.
     // In existing code, LeafQueue#updateClusterResource handled this. However

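The recurring edit in this file is worth calling out: now that effective
min/max resources are computed when the cluster resource is updated, tests
that build a queue hierarchy by hand must push the cluster resource before
asserting. The pattern, as established in the hunks above:

    // Setup pattern these changes introduce: after parsing the hierarchy,
    // propagate the cluster resource so effective resources get computed.
    CSQueue root = CapacitySchedulerQueueManager.parseQueue(
        csContext, csConf, null, "root", queues, queues, TestUtils.spyHook);
    root.updateClusterResource(clusterResource,
        new ResourceLimits(clusterResource));
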
http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java
index 0aac2ef..d73f1c8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java
@@ -54,6 +54,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.AMState;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.preemption.PreemptionManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
@@ -600,6 +601,7 @@ public class TestApplicationLimitsByPartition {
     RMContext spyRMContext = spy(rmContext);
     when(spyRMContext.getNodeLabelManager()).thenReturn(mgr);
     when(csContext.getRMContext()).thenReturn(spyRMContext);
+    when(csContext.getPreemptionManager()).thenReturn(new PreemptionManager());
 
     mgr.activateNode(NodeId.newInstance("h0", 0),
         Resource.newInstance(160 * GB, 16)); // default Label
@@ -615,6 +617,8 @@ public class TestApplicationLimitsByPartition {
     Map<String, CSQueue> queues = new HashMap<String, CSQueue>();
     CSQueue rootQueue = CapacitySchedulerQueueManager.parseQueue(csContext,
         csConf, null, "root", queues, queues, TestUtils.spyHook);
+    rootQueue.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     ResourceUsage queueResUsage = rootQueue.getQueueResourceUsage();
     when(csContext.getClusterResourceUsage())

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
index 64e0df4..cc9a3d4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
@@ -4268,7 +4268,7 @@ public class TestCapacityScheduler {
           null, null, NULL_UPDATE_REQUESTS);
       CapacityScheduler.schedule(cs);
     }
-    assertEquals("P2 Used Resource should be 8 GB", 8 * GB,
+    assertEquals("P2 Used Resource should be 7 GB", 7 * GB,
         cs.getQueue("p2").getUsedResources().getMemorySize());
 
     //Free a container from X1

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
index e34665d..b6b0361 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
@@ -242,6 +242,8 @@ public class TestChildQueueOrder {
       Resources.createResource(numNodes * (memoryPerNode*GB), 
           numNodes * coresPerNode);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Start testing
     CSQueue a = queues.get(A);


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[29/50] [abbrv] hadoop git commit: HADOOP-14628. Upgrade maven enforcer plugin to 3.0.0-M1.

Posted by wa...@apache.org.
HADOOP-14628. Upgrade maven enforcer plugin to 3.0.0-M1.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ebabc709
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ebabc709
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ebabc709

Branch: refs/heads/YARN-5881
Commit: ebabc7094c6bcbd9063744331c69e3fba615fa62
Parents: a53b8b6
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Aug 9 13:16:31 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Aug 9 13:18:16 2017 +0900

----------------------------------------------------------------------
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml      | 1 -
 hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml | 1 -
 pom.xml                                                           | 2 +-
 3 files changed, 1 insertion(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebabc709/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
index e495a69..2f31fa6 100644
--- a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
@@ -46,7 +46,6 @@
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-enforcer-plugin</artifactId>
-        <version>1.4</version>
         <dependencies>
           <dependency>
             <groupId>org.codehaus.mojo</groupId>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebabc709/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
index 68d1f5b..0e23db9 100644
--- a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
@@ -50,7 +50,6 @@
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-enforcer-plugin</artifactId>
-        <version>1.4</version>
         <dependencies>
           <dependency>
             <groupId>org.codehaus.mojo</groupId>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebabc709/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index d82cd9f..22a4b59 100644
--- a/pom.xml
+++ b/pom.xml
@@ -97,7 +97,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xs
     <maven-antrun-plugin.version>1.7</maven-antrun-plugin.version>
     <maven-assembly-plugin.version>2.4</maven-assembly-plugin.version>
     <maven-dependency-plugin.version>2.10</maven-dependency-plugin.version>
-    <maven-enforcer-plugin.version>1.4.1</maven-enforcer-plugin.version>
+    <maven-enforcer-plugin.version>3.0.0-M1</maven-enforcer-plugin.version>
     <maven-javadoc-plugin.version>2.10.4</maven-javadoc-plugin.version>
     <maven-gpg-plugin.version>1.5</maven-gpg-plugin.version>
     <maven-remote-resources-plugin.version>1.5</maven-remote-resources-plugin.version>


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[14/50] [abbrv] hadoop git commit: HDFS-12264. DataNode uses a deprecated method IoUtils#cleanup. Contributed by Ajay Yadav.

Posted by wa...@apache.org.
HDFS-12264. DataNode uses a deprecated method IoUtils#cleanup. Contributed by Ajay Yadav.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bc206806
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bc206806
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bc206806

Branch: refs/heads/YARN-5881
Commit: bc206806dadc5dc85f182d98d859307cfb33172b
Parents: adb84f3
Author: Arpit Agarwal <ar...@apache.org>
Authored: Mon Aug 7 15:05:10 2017 -0700
Committer: Arpit Agarwal <ar...@apache.org>
Committed: Mon Aug 7 15:05:10 2017 -0700

----------------------------------------------------------------------
 .../hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc206806/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
index 1574431..46ea1c8 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
@@ -293,7 +293,7 @@ public class IOUtils {
    */
   public static void closeStream(java.io.Closeable stream) {
     if (stream != null) {
-      cleanup(null, stream);
+      cleanupWithLogger(null, stream);
     }
   }
   

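For context on the migration this change completes: the commons-logging
based IOUtils.cleanup(Log, Closeable...) is deprecated in favor of the
slf4j based cleanupWithLogger. A minimal before/after sketch (the copy
routine is made up; the IOUtils calls are the real Hadoop API):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    import org.apache.hadoop.io.IOUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CopySketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(CopySketch.class);

      public static void copy(String src, String dst) {
        FileInputStream in = null;
        FileOutputStream out = null;
        try {
          in = new FileInputStream(src);
          out = new FileOutputStream(dst);
          IOUtils.copyBytes(in, out, 4096);
        } catch (Exception e) {
          LOG.error("copy failed", e);
        } finally {
          // Before: IOUtils.cleanup(LOG, in, out);  // took a commons-logging Log
          IOUtils.cleanupWithLogger(LOG, in, out);   // slf4j replacement
        }
      }
    }
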

---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[23/50] [abbrv] hadoop git commit: YARN-6879. TestLeafQueue.testDRFUserLimits() has commented out code (Contributed by Angela Wang via Daniel Templeton)

Posted by wa...@apache.org.
YARN-6879. TestLeafQueue.testDRFUserLimits() has commented out code
(Contributed by Angela Wang via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e0c24145
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e0c24145
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e0c24145

Branch: refs/heads/YARN-5881
Commit: e0c24145d2c2a7d2cf10864fb4800cb1dcbc2977
Parents: 1794de3
Author: Daniel Templeton <te...@apache.org>
Authored: Tue Aug 8 13:35:22 2017 -0700
Committer: Daniel Templeton <te...@apache.org>
Committed: Tue Aug 8 13:35:22 2017 -0700

----------------------------------------------------------------------
 .../server/resourcemanager/scheduler/capacity/TestLeafQueue.java   | 2 --
 1 file changed, 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0c24145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
index 2864d7f..d45f756 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
@@ -820,8 +820,6 @@ public class TestLeafQueue {
       applyCSAssignment(clusterResource, assign, b, nodes, apps);
     } while (assign.getResource().getMemorySize() > 0 &&
         assign.getAssignmentInformation().getNumReservations() == 0);
-    //LOG.info("user_0: " + queueUser0.getUsed());
-    //LOG.info("user_1: " + queueUser1.getUsed());
 
     assertTrue("Verify user_0 got resources ", queueUser0.getUsed()
         .getMemorySize() > 0);




[43/50] [abbrv] hadoop git commit: HDFS-11957. Enable POSIX ACL inheritance by default. Contributed by John Zhuge.

Posted by wa...@apache.org.
HDFS-11957. Enable POSIX ACL inheritance by default. Contributed by John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/312e57b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/312e57b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/312e57b9

Branch: refs/heads/YARN-5881
Commit: 312e57b95477ec95e6735f5721c646ad1df019f8
Parents: a8b7546
Author: John Zhuge <jz...@apache.org>
Authored: Fri Jun 9 08:42:16 2017 -0700
Committer: John Zhuge <jz...@apache.org>
Committed: Thu Aug 10 10:30:47 2017 -0700

----------------------------------------------------------------------
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java    |  2 +-
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml   |  2 +-
 .../src/site/markdown/HdfsPermissionsGuide.md         |  2 +-
 .../test/java/org/apache/hadoop/cli/TestAclCLI.java   |  2 ++
 .../hadoop/hdfs/server/namenode/FSAclBaseTest.java    |  8 ++++----
 .../hdfs/server/namenode/TestFSImageWithAcl.java      | 14 ++++++++------
 6 files changed, 17 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index dc9bf76..f4c383e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -269,7 +269,7 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_KEY =
       "dfs.namenode.posix.acl.inheritance.enabled";
   public static final boolean
-      DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_DEFAULT = false;
+      DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_DEFAULT = true;
   public static final String  DFS_NAMENODE_XATTRS_ENABLED_KEY = "dfs.namenode.xattrs.enabled";
   public static final boolean DFS_NAMENODE_XATTRS_ENABLED_DEFAULT = true;
   public static final String  DFS_ADMIN = "dfs.cluster.administrators";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 4942967..03becc9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -459,7 +459,7 @@
 
   <property>
     <name>dfs.namenode.posix.acl.inheritance.enabled</name>
-    <value>false</value>
+    <value>true</value>
     <description>
       Set to true to enable POSIX style ACL inheritance. When it is enabled
       and the create request comes from a compatible client, the NameNode

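Clusters that relied on the old umask-filtered create modes can opt back out.
A minimal sketch, assuming only the key shown in the diff above (everything
else is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;

    public class DisableAclInheritance {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Restore the pre-HDFS-11957 behavior on the NameNode side.
        conf.setBoolean(
            DFSConfigKeys.DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_KEY,
            false);
        System.out.println("posix acl inheritance enabled: " + conf.getBoolean(
            DFSConfigKeys.DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_KEY,
            true));
      }
    }

The TestAclCLI change below does exactly this, so the CLI test fixtures keep
their umask-based expectations.
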
http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
index c502534..82b5cec 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
@@ -322,7 +322,7 @@ Configuration Parameters
 
 *   `dfs.namenode.posix.acl.inheritance.enabled`
 
-    Set to true to enable POSIX style ACL inheritance. Disabled by default.
+    Set to true to enable POSIX style ACL inheritance. Enabled by default.
     When it is enabled and the create request comes from a compatible client,
     the NameNode will apply default ACLs from the parent directory to
     the create mode and ignore the client umask. If no default ACL is found,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
index 75111bb..9cf2180 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
@@ -34,6 +34,8 @@ public class TestAclCLI extends CLITestHelperDFS {
 
   protected void initConf() {
     conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
+    conf.setBoolean(
+        DFSConfigKeys.DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_KEY, false);
   }
 
   @Before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
index 60b0ab1..93a83fd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
@@ -903,7 +903,7 @@ public abstract class FSAclBaseTest {
     assertArrayEquals(new AclEntry[] {
       aclEntry(ACCESS, USER, "foo", ALL),
       aclEntry(ACCESS, GROUP, READ_EXECUTE) }, returned);
-    assertPermission(filePath, (short)010640);
+    assertPermission(filePath, (short)010660);
     assertAclFeature(filePath, true);
   }
 
@@ -1003,7 +1003,7 @@ public abstract class FSAclBaseTest {
       aclEntry(DEFAULT, GROUP, READ_EXECUTE),
       aclEntry(DEFAULT, MASK, ALL),
       aclEntry(DEFAULT, OTHER, NONE) }, returned);
-    assertPermission(dirPath, (short)010750);
+    assertPermission(dirPath, (short)010770);
     assertAclFeature(dirPath, true);
   }
 
@@ -1120,7 +1120,7 @@ public abstract class FSAclBaseTest {
     s = fs.getAclStatus(filePath);
     returned = s.getEntries().toArray(new AclEntry[0]);
     assertArrayEquals(expected, returned);
-    assertPermission(filePath, (short)010640);
+    assertPermission(filePath, (short)010660);
     assertAclFeature(filePath, true);
   }
 
@@ -1149,7 +1149,7 @@ public abstract class FSAclBaseTest {
     s = fs.getAclStatus(subdirPath);
     returned = s.getEntries().toArray(new AclEntry[0]);
     assertArrayEquals(expected, returned);
-    assertPermission(subdirPath, (short)010750);
+    assertPermission(subdirPath, (short)010770);
     assertAclFeature(subdirPath, true);
   }
 
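Decoding the octal expectations above: in these asserts the leading 01
(010000) flags that an ACL is present (our reading of the test helper), and
the low digits are the classic mode. With inheritance enabled, the parent's
default ACL mask now sets the group class instead of the client umask, so
group goes from r-- to rw-. A small sketch of the arithmetic:

    public class AclModeArithmetic {
      public static void main(String[] args) {
        short preAclInheritance  = (short) 010640; // ACL flag | rw-r-----
        short postAclInheritance = (short) 010660; // ACL flag | rw-rw----
        // Extract the group class (second octal digit from the right).
        System.out.printf("group bits: %o -> %o%n",
            (preAclInheritance >> 3) & 7, (postAclInheritance >> 3) & 7);
      }
    }
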

http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
index 48d3dea..d9c24d9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
@@ -138,13 +138,15 @@ public class TestFSImageWithAcl {
       aclEntry(DEFAULT, MASK, ALL),
       aclEntry(DEFAULT, OTHER, READ_EXECUTE) };
 
+    short permExpected = (short)010775;
+
     AclEntry[] fileReturned = fs.getAclStatus(filePath).getEntries()
       .toArray(new AclEntry[0]);
     Assert.assertArrayEquals(fileExpected, fileReturned);
     AclEntry[] subdirReturned = fs.getAclStatus(subdirPath).getEntries()
       .toArray(new AclEntry[0]);
     Assert.assertArrayEquals(subdirExpected, subdirReturned);
-    assertPermission(fs, subdirPath, (short)010755);
+    assertPermission(fs, subdirPath, permExpected);
 
     restart(fs, persistNamespace);
 
@@ -154,7 +156,7 @@ public class TestFSImageWithAcl {
     subdirReturned = fs.getAclStatus(subdirPath).getEntries()
       .toArray(new AclEntry[0]);
     Assert.assertArrayEquals(subdirExpected, subdirReturned);
-    assertPermission(fs, subdirPath, (short)010755);
+    assertPermission(fs, subdirPath, permExpected);
 
     aclSpec = Lists.newArrayList(aclEntry(DEFAULT, USER, "foo", READ_WRITE));
     fs.modifyAclEntries(dirPath, aclSpec);
@@ -165,7 +167,7 @@ public class TestFSImageWithAcl {
     subdirReturned = fs.getAclStatus(subdirPath).getEntries()
       .toArray(new AclEntry[0]);
     Assert.assertArrayEquals(subdirExpected, subdirReturned);
-    assertPermission(fs, subdirPath, (short)010755);
+    assertPermission(fs, subdirPath, permExpected);
 
     restart(fs, persistNamespace);
 
@@ -175,7 +177,7 @@ public class TestFSImageWithAcl {
     subdirReturned = fs.getAclStatus(subdirPath).getEntries()
       .toArray(new AclEntry[0]);
     Assert.assertArrayEquals(subdirExpected, subdirReturned);
-    assertPermission(fs, subdirPath, (short)010755);
+    assertPermission(fs, subdirPath, permExpected);
 
     fs.removeAcl(dirPath);
 
@@ -185,7 +187,7 @@ public class TestFSImageWithAcl {
     subdirReturned = fs.getAclStatus(subdirPath).getEntries()
       .toArray(new AclEntry[0]);
     Assert.assertArrayEquals(subdirExpected, subdirReturned);
-    assertPermission(fs, subdirPath, (short)010755);
+    assertPermission(fs, subdirPath, permExpected);
 
     restart(fs, persistNamespace);
 
@@ -195,7 +197,7 @@ public class TestFSImageWithAcl {
     subdirReturned = fs.getAclStatus(subdirPath).getEntries()
       .toArray(new AclEntry[0]);
     Assert.assertArrayEquals(subdirExpected, subdirReturned);
-    assertPermission(fs, subdirPath, (short)010755);
+    assertPermission(fs, subdirPath, permExpected);
   }
 
   @Test




[13/50] [abbrv] hadoop git commit: YARN-4161. Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration. (Wei Yan via wangda)

Posted by wa...@apache.org.
YARN-4161. Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration. (Wei Yan via wangda)

Change-Id: Ic441ae4e0bf72e7232411eb54243ec143d5fd0d3


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/adb84f34
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/adb84f34
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/adb84f34

Branch: refs/heads/YARN-5881
Commit: adb84f34db7e1cdcd72aa8e3deb464c48da9e353
Parents: a3a9c97
Author: Wangda Tan <wa...@apache.org>
Authored: Mon Aug 7 11:32:12 2017 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Mon Aug 7 11:32:21 2017 -0700

----------------------------------------------------------------------
 .../scheduler/capacity/CapacityScheduler.java   |  53 ++++-
 .../CapacitySchedulerConfiguration.java         |  23 ++
 .../capacity/TestCapacityScheduler.java         | 232 ++++++++++++++++++-
 3 files changed, 289 insertions(+), 19 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/adb84f34/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 2ccaf63..3286982 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -94,11 +94,9 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueInvalidExcep
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedContainerChangeRequest;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplication;
 
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerDynamicEditException;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesLogger;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager;
@@ -163,6 +161,9 @@ public class CapacityScheduler extends
 
   private int offswitchPerHeartbeatLimit;
 
+  private boolean assignMultipleEnabled;
+
+  private int maxAssignPerHeartbeat;
 
   @Override
   public void setConf(Configuration conf) {
@@ -308,6 +309,9 @@ public class CapacityScheduler extends
       asyncScheduleInterval = this.conf.getLong(ASYNC_SCHEDULER_INTERVAL,
           DEFAULT_ASYNC_SCHEDULER_INTERVAL);
 
+      this.assignMultipleEnabled = this.conf.getAssignMultipleEnabled();
+      this.maxAssignPerHeartbeat = this.conf.getMaxAssignPerHeartbeat();
+
       // number of threads for async scheduling
       int maxAsyncSchedulingThreads = this.conf.getInt(
           CapacitySchedulerConfiguration.SCHEDULE_ASYNCHRONOUSLY_MAXIMUM_THREAD,
@@ -1109,17 +1113,29 @@ public class CapacityScheduler extends
       .getAssignmentInformation().getReserved());
   }
 
-  private boolean canAllocateMore(CSAssignment assignment, int offswitchCount) {
-    if (null != assignment && Resources.greaterThan(getResourceCalculator(),
-        getClusterResource(), assignment.getResource(), Resources.none())
-        && offswitchCount < offswitchPerHeartbeatLimit) {
-      // And it should not be a reserved container
-      if (assignment.getAssignmentInformation().getNumReservations() == 0) {
-        return true;
-      }
+  private boolean canAllocateMore(CSAssignment assignment, int offswitchCount,
+      int assignedContainers) {
+    // Current assignment shouldn't be empty
+    if (assignment == null
+            || Resources.equals(assignment.getResource(), Resources.none())) {
+      return false;
     }
 
-    return false;
+    // offswitch assignment should be under threshold
+    if (offswitchCount >= offswitchPerHeartbeatLimit) {
+      return false;
+    }
+
+    // And it should not be a reserved container
+    if (assignment.getAssignmentInformation().getNumReservations() > 0) {
+      return false;
+    }
+
+    // assignMultipleEnabled should be ON,
+    // and assignedContainers should be under threshold
+    return assignMultipleEnabled
+        && (maxAssignPerHeartbeat == -1
+            || assignedContainers < maxAssignPerHeartbeat);
   }
 
   /**
@@ -1131,6 +1147,7 @@ public class CapacityScheduler extends
     FiCaSchedulerNode node = getNode(nodeId);
     if (null != node) {
       int offswitchCount = 0;
+      int assignedContainers = 0;
 
       PlacementSet<FiCaSchedulerNode> ps = new SimplePlacementSet<>(node);
       CSAssignment assignment = allocateContainersToNode(ps, withNodeHeartbeat);
@@ -1141,7 +1158,13 @@ public class CapacityScheduler extends
           offswitchCount++;
         }
 
-        while (canAllocateMore(assignment, offswitchCount)) {
+        if (Resources.greaterThan(calculator, getClusterResource(),
+            assignment.getResource(), Resources.none())) {
+          assignedContainers++;
+        }
+
+        while (canAllocateMore(assignment, offswitchCount,
+            assignedContainers)) {
           // Try to see if it is possible to allocate multiple container for
           // the same node heartbeat
           assignment = allocateContainersToNode(ps, true);
@@ -1150,6 +1173,12 @@ public class CapacityScheduler extends
               && assignment.getType() == NodeType.OFF_SWITCH) {
             offswitchCount++;
           }
+
+          if (null != assignment
+              && Resources.greaterThan(calculator, getClusterResource(),
+                  assignment.getResource(), Resources.none())) {
+            assignedContainers++;
+          }
         }
 
         if (offswitchCount >= offswitchPerHeartbeatLimit) {

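The refactored guard reads as one predicate; here is a self-contained
restatement of canAllocateMore() for reference (parameter names follow the
diff, but the Resources comparisons are reduced to a plain memory check,
which is a simplification):

    public class CanAllocateMoreSketch {
      static boolean canAllocateMore(long assignedMemory, int numReservations,
          int offswitchCount, int offswitchPerHeartbeatLimit,
          boolean assignMultipleEnabled, int maxAssignPerHeartbeat,
          int assignedContainers) {
        if (assignedMemory <= 0) {
          return false; // current assignment is empty
        }
        if (offswitchCount >= offswitchPerHeartbeatLimit) {
          return false; // off-switch assignments hit the per-heartbeat cap
        }
        if (numReservations > 0) {
          return false; // a reserved container ends this heartbeat's loop
        }
        // Multi-assignment must be on and under the optional cap (-1 = no cap).
        return assignMultipleEnabled
            && (maxAssignPerHeartbeat == -1
                || assignedContainers < maxAssignPerHeartbeat);
      }

      public static void main(String[] args) {
        System.out.println(canAllocateMore(1024, 0, 0, 1, true, 2, 1)); // true
        System.out.println(canAllocateMore(1024, 0, 0, 1, true, 2, 2)); // false
      }
    }
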
http://git-wip-us.apache.org/repos/asf/hadoop/blob/adb84f34/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
index 1e29d50..13b9ff6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
@@ -301,6 +301,21 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur
   @Private
   public static final boolean DEFAULT_LAZY_PREEMPTION_ENABLED = false;
 
+  @Private
+  public static final String ASSIGN_MULTIPLE_ENABLED = PREFIX
+      + "per-node-heartbeat.multiple-assignments-enabled";
+
+  @Private
+  public static final boolean DEFAULT_ASSIGN_MULTIPLE_ENABLED = true;
+
+  /** Maximum number of containers to assign on each check-in. */
+  @Private
+  public static final String MAX_ASSIGN_PER_HEARTBEAT = PREFIX
+      + "per-node-heartbeat.maximum-container-assignments";
+
+  @Private
+  public static final int DEFAULT_MAX_ASSIGN_PER_HEARTBEAT = -1;
+
   AppPriorityACLConfigurationParser priorityACLConfig = new AppPriorityACLConfigurationParser();
 
   public CapacitySchedulerConfiguration() {
@@ -1473,4 +1488,12 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur
     }
     return userWeights;
   }
+
+  public boolean getAssignMultipleEnabled() {
+    return getBoolean(ASSIGN_MULTIPLE_ENABLED, DEFAULT_ASSIGN_MULTIPLE_ENABLED);
+  }
+
+  public int getMaxAssignPerHeartbeat() {
+    return getInt(MAX_ASSIGN_PER_HEARTBEAT, DEFAULT_MAX_ASSIGN_PER_HEARTBEAT);
+  }
 }

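For operators: assuming the standard CapacitySchedulerConfiguration PREFIX of
"yarn.scheduler.capacity." (the diff shows only the suffixes), the two new
knobs can be set like any other scheduler property. A minimal sketch:

    import org.apache.hadoop.conf.Configuration;

    public class HeartbeatAssignmentConfig {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Turn multi-assignment off entirely (it defaults to true)...
        conf.setBoolean("yarn.scheduler.capacity."
            + "per-node-heartbeat.multiple-assignments-enabled", false);
        // ...or leave it on and cap containers per heartbeat (-1 = unlimited).
        conf.setInt("yarn.scheduler.capacity."
            + "per-node-heartbeat.maximum-container-assignments", 2);
        System.out.println(conf.get("yarn.scheduler.capacity."
            + "per-node-heartbeat.maximum-container-assignments"));
      }
    }

The tests added below exercise both settings through
CapacitySchedulerConfiguration directly.
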
http://git-wip-us.apache.org/repos/asf/hadoop/blob/adb84f34/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
index f51f771..64e0df4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
@@ -233,6 +233,17 @@ public class TestCapacityScheduler {
     }
   }
 
+  private NodeManager registerNode(ResourceManager rm, String hostName,
+      int containerManagerPort, int httpPort, String rackName,
+          Resource capability) throws IOException, YarnException {
+    NodeManager nm = new NodeManager(hostName,
+        containerManagerPort, httpPort, rackName, capability, rm);
+    NodeAddedSchedulerEvent nodeAddEvent1 =
+        new NodeAddedSchedulerEvent(rm.getRMContext().getRMNodes()
+            .get(nm.getNodeId()));
+    rm.getResourceScheduler().handle(nodeAddEvent1);
+    return nm;
+  }
 
   @Test (timeout = 30000)
   public void testConfValidation() throws Exception {
@@ -267,12 +278,12 @@ public class TestCapacityScheduler {
     }
   }
 
-  private org.apache.hadoop.yarn.server.resourcemanager.NodeManager
+  private NodeManager
       registerNode(String hostName, int containerManagerPort, int httpPort,
           String rackName, Resource capability)
           throws IOException, YarnException {
-    org.apache.hadoop.yarn.server.resourcemanager.NodeManager nm =
-        new org.apache.hadoop.yarn.server.resourcemanager.NodeManager(
+    NodeManager nm =
+        new NodeManager(
             hostName, containerManagerPort, httpPort, rackName, capability,
             resourceManager);
     NodeAddedSchedulerEvent nodeAddEvent1 =
@@ -400,8 +411,216 @@ public class TestCapacityScheduler {
     LOG.info("--- END: testCapacityScheduler ---");
   }
 
-  private void nodeUpdate(
-      org.apache.hadoop.yarn.server.resourcemanager.NodeManager nm) {
+  @Test
+  public void testNotAssignMultiple() throws Exception {
+    LOG.info("--- START: testNotAssignMultiple ---");
+    ResourceManager rm = new ResourceManager() {
+      @Override
+      protected RMNodeLabelsManager createNodeLabelManager() {
+        RMNodeLabelsManager mgr = new NullRMNodeLabelsManager();
+        mgr.init(getConfig());
+        return mgr;
+      }
+    };
+    CapacitySchedulerConfiguration csConf =
+        new CapacitySchedulerConfiguration();
+    csConf.setBoolean(
+        CapacitySchedulerConfiguration.ASSIGN_MULTIPLE_ENABLED, false);
+    setupQueueConfiguration(csConf);
+    YarnConfiguration conf = new YarnConfiguration(csConf);
+    conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+    rm.init(conf);
+    rm.getRMContext().getContainerTokenSecretManager().rollMasterKey();
+    rm.getRMContext().getNMTokenSecretManager().rollMasterKey();
+    ((AsyncDispatcher) rm.getRMContext().getDispatcher()).start();
+    RMContext mC = mock(RMContext.class);
+    when(mC.getConfigurationProvider()).thenReturn(
+        new LocalConfigurationProvider());
+
+    // Register node1
+    String host0 = "host_0";
+    NodeManager nm0 =
+        registerNode(rm, host0, 1234, 2345, NetworkTopology.DEFAULT_RACK,
+            Resources.createResource(10 * GB, 10));
+
+    // ResourceRequest priorities
+    Priority priority0 = Priority.newInstance(0);
+    Priority priority1 = Priority.newInstance(1);
+
+    // Submit an application
+    Application application0 = new Application("user_0", "a1", rm);
+    application0.submit();
+    application0.addNodeManager(host0, 1234, nm0);
+
+    Resource capability00 = Resources.createResource(1 * GB, 1);
+    application0.addResourceRequestSpec(priority0, capability00);
+
+    Resource capability01 = Resources.createResource(2 * GB, 1);
+    application0.addResourceRequestSpec(priority1, capability01);
+
+    Task task00 =
+        new Task(application0, priority0, new String[] {host0});
+    Task task01 =
+        new Task(application0, priority1, new String[] {host0});
+    application0.addTask(task00);
+    application0.addTask(task01);
+
+    // Submit another application
+    Application application1 = new Application("user_1", "b2", rm);
+    application1.submit();
+    application1.addNodeManager(host0, 1234, nm0);
+
+    Resource capability10 = Resources.createResource(3 * GB, 1);
+    application1.addResourceRequestSpec(priority0, capability10);
+
+    Resource capability11 = Resources.createResource(4 * GB, 1);
+    application1.addResourceRequestSpec(priority1, capability11);
+
+    Task task10 = new Task(application1, priority0, new String[] {host0});
+    Task task11 = new Task(application1, priority1, new String[] {host0});
+    application1.addTask(task10);
+    application1.addTask(task11);
+
+    // Send resource requests to the scheduler
+    application0.schedule();
+
+    application1.schedule();
+
+    // Send a heartbeat to kick the tires on the Scheduler
+    LOG.info("Kick!");
+
+    // First heartbeat: only one container (task10, 3G) should be assigned
+    nodeUpdate(rm, nm0);
+
+    // Get allocations from the scheduler
+    application0.schedule();
+    application1.schedule();
+    // Only one task per heartbeat should be scheduled
+    checkNodeResourceUsage(3 * GB, nm0); // task10 (3G)
+    checkApplicationResourceUsage(0 * GB, application0);
+    checkApplicationResourceUsage(3 * GB, application1);
+
+    // Another heartbeat
+    nodeUpdate(rm, nm0);
+    application0.schedule();
+    checkApplicationResourceUsage(1 * GB, application0);
+    application1.schedule();
+    checkApplicationResourceUsage(3 * GB, application1);
+    checkNodeResourceUsage(4 * GB, nm0);
+    LOG.info("--- END: testNotAssignMultiple ---");
+  }
+
+  @Test
+  public void testAssignMultiple() throws Exception {
+    LOG.info("--- START: testAssignMultiple ---");
+    ResourceManager rm = new ResourceManager() {
+      @Override
+      protected RMNodeLabelsManager createNodeLabelManager() {
+        RMNodeLabelsManager mgr = new NullRMNodeLabelsManager();
+        mgr.init(getConfig());
+        return mgr;
+      }
+    };
+    CapacitySchedulerConfiguration csConf =
+        new CapacitySchedulerConfiguration();
+    csConf.setBoolean(
+        CapacitySchedulerConfiguration.ASSIGN_MULTIPLE_ENABLED, true);
+    // Each heartbeat will assign 2 containers at most
+    csConf.setInt(CapacitySchedulerConfiguration.MAX_ASSIGN_PER_HEARTBEAT, 2);
+    setupQueueConfiguration(csConf);
+    YarnConfiguration conf = new YarnConfiguration(csConf);
+    conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+    rm.init(conf);
+    rm.getRMContext().getContainerTokenSecretManager().rollMasterKey();
+    rm.getRMContext().getNMTokenSecretManager().rollMasterKey();
+    ((AsyncDispatcher) rm.getRMContext().getDispatcher()).start();
+    RMContext mC = mock(RMContext.class);
+    when(mC.getConfigurationProvider()).thenReturn(
+            new LocalConfigurationProvider());
+
+    // Register node1
+    String host0 = "host_0";
+    NodeManager nm0 =
+        registerNode(rm, host0, 1234, 2345, NetworkTopology.DEFAULT_RACK,
+            Resources.createResource(10 * GB, 10));
+
+    // ResourceRequest priorities
+    Priority priority0 = Priority.newInstance(0);
+    Priority priority1 = Priority.newInstance(1);
+
+    // Submit an application
+    Application application0 = new Application("user_0", "a1", rm);
+    application0.submit();
+    application0.addNodeManager(host0, 1234, nm0);
+
+    Resource capability00 = Resources.createResource(1 * GB, 1);
+    application0.addResourceRequestSpec(priority0, capability00);
+
+    Resource capability01 = Resources.createResource(2 * GB, 1);
+    application0.addResourceRequestSpec(priority1, capability01);
+
+    Task task00 = new Task(application0, priority0, new String[] {host0});
+    Task task01 = new Task(application0, priority1, new String[] {host0});
+    application0.addTask(task00);
+    application0.addTask(task01);
+
+    // Submit another application
+    Application application1 = new Application("user_1", "b2", rm);
+    application1.submit();
+    application1.addNodeManager(host0, 1234, nm0);
+
+    Resource capability10 = Resources.createResource(3 * GB, 1);
+    application1.addResourceRequestSpec(priority0, capability10);
+
+    Resource capability11 = Resources.createResource(4 * GB, 1);
+    application1.addResourceRequestSpec(priority1, capability11);
+
+    Task task10 =
+            new Task(application1, priority0, new String[] {host0});
+    Task task11 =
+            new Task(application1, priority1, new String[] {host0});
+    application1.addTask(task10);
+    application1.addTask(task11);
+
+    // Send resource requests to the scheduler
+    application0.schedule();
+
+    application1.schedule();
+
+    // Send a heartbeat to kick the tires on the Scheduler
+    LOG.info("Kick!");
+
+    // First heartbeat: up to two containers (task00 1G, task10 3G) expected
+    nodeUpdate(rm, nm0);
+
+    // Get allocations from the scheduler
+    application0.schedule();
+    application1.schedule();
+    // Up to 2 tasks per heartbeat should be scheduled
+    checkNodeResourceUsage(4 * GB, nm0); // task00 (1G) + task10 (3G)
+    checkApplicationResourceUsage(1 * GB, application0);
+    checkApplicationResourceUsage(3 * GB, application1);
+
+    // Another heartbeat
+    nodeUpdate(rm, nm0);
+    application0.schedule();
+    checkApplicationResourceUsage(3 * GB, application0);
+    application1.schedule();
+    checkApplicationResourceUsage(7 * GB, application1);
+    checkNodeResourceUsage(10 * GB, nm0);
+    LOG.info("--- END: testAssignMultiple ---");
+  }
+
+  private void nodeUpdate(ResourceManager rm, NodeManager nm) {
+    RMNode node = rm.getRMContext().getRMNodes().get(nm.getNodeId());
+    // Send a heartbeat to kick the tires on the Scheduler
+    NodeUpdateSchedulerEvent nodeUpdate = new NodeUpdateSchedulerEvent(node);
+    rm.getResourceScheduler().handle(nodeUpdate);
+  }
+
+  private void nodeUpdate(NodeManager nm) {
     RMNode node = resourceManager.getRMContext().getRMNodes().get(nm.getNodeId());
     // Send a heartbeat to kick the tires on the Scheduler
     NodeUpdateSchedulerEvent nodeUpdate = new NodeUpdateSchedulerEvent(node);
@@ -699,8 +918,7 @@ public class TestCapacityScheduler {
     Assert.assertEquals(expected, application.getUsedResources().getMemorySize());
   }
 
-  private void checkNodeResourceUsage(int expected,
-      org.apache.hadoop.yarn.server.resourcemanager.NodeManager node) {
+  private void checkNodeResourceUsage(int expected, NodeManager node) {
     Assert.assertEquals(expected, node.getUsed().getMemorySize());
     node.checkResourceUsage();
   }




[03/50] [abbrv] hadoop git commit: HDFS-12224. Add tests to TestJournalNodeSync for sync after JN downtime. Contributed by Hanisha Koneru.

Posted by wa...@apache.org.
HDFS-12224. Add tests to TestJournalNodeSync for sync after JN downtime. Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bbc6d254
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bbc6d254
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bbc6d254

Branch: refs/heads/YARN-5881
Commit: bbc6d254c8a953abba69415d80edeede3ee6269d
Parents: fe33417
Author: Arpit Agarwal <ar...@apache.org>
Authored: Fri Aug 4 12:51:33 2017 -0700
Committer: Arpit Agarwal <ar...@apache.org>
Committed: Fri Aug 4 12:51:33 2017 -0700

----------------------------------------------------------------------
 .../hadoop/hdfs/qjournal/server/Journal.java    |   3 +-
 .../hdfs/qjournal/server/JournalMetrics.java    |  11 +
 .../hdfs/qjournal/server/JournalNodeSyncer.java |   4 +
 .../hdfs/qjournal/TestJournalNodeSync.java      | 265 -----------
 .../hdfs/qjournal/server/TestJournalNode.java   |   6 +-
 .../qjournal/server/TestJournalNodeSync.java    | 439 +++++++++++++++++++
 6 files changed, 458 insertions(+), 270 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbc6d254/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
index 0041d5e..0f4091d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
@@ -286,8 +286,7 @@ public class Journal implements Closeable {
     fjm.setLastReadableTxId(val);
   }
 
-  @VisibleForTesting
-  JournalMetrics getMetricsForTests() {
+  JournalMetrics getMetrics() {
     return metrics;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbc6d254/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalMetrics.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalMetrics.java
index cffe2c1..fcfd901 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalMetrics.java
@@ -45,6 +45,9 @@ class JournalMetrics {
   
   @Metric("Number of batches written where this node was lagging")
   MutableCounterLong batchesWrittenWhileLagging;
+
+  @Metric("Number of edit logs downloaded by JournalNodeSyncer")
+  private MutableCounterLong numEditLogsSynced;
   
   private final int[] QUANTILE_INTERVALS = new int[] {
       1*60, // 1m
@@ -120,4 +123,12 @@ class JournalMetrics {
       q.add(us);
     }
   }
+
+  public MutableCounterLong getNumEditLogsSynced() {
+    return numEditLogsSynced;
+  }
+
+  public void incrNumEditLogsSynced() {
+    numEditLogsSynced.incr();
+  }
 }

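The new counter should surface through the usual metrics2 plumbing. A hedged
test-side sketch, in the style of the TestJournalNode changes below; the
registered name "NumEditLogsSynced" is our assumption from the field name and
the convention of the neighboring counters:

    package org.apache.hadoop.hdfs.qjournal.server;

    import org.apache.hadoop.metrics2.MetricsRecordBuilder;
    import org.apache.hadoop.test.MetricsAsserts;

    class EditLogsSyncedAssert {
      // Same package as Journal, since getMetrics() is package-private.
      static void assertOneLogSynced(Journal journal) {
        MetricsRecordBuilder rb =
            MetricsAsserts.getMetrics(journal.getMetrics().getName());
        MetricsAsserts.assertCounter("NumEditLogsSynced", 1L, rb);
      }
    }
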
http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbc6d254/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
index 479f6a0..537ba0a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
@@ -77,6 +77,7 @@ public class JournalNodeSyncer {
   private final long journalSyncInterval;
   private final int logSegmentTransferTimeout;
   private final DataTransferThrottler throttler;
+  private final JournalMetrics metrics;
 
   JournalNodeSyncer(JournalNode jouranlNode, Journal journal, String jid,
       Configuration conf) {
@@ -93,6 +94,7 @@ public class JournalNodeSyncer {
         DFSConfigKeys.DFS_EDIT_LOG_TRANSFER_TIMEOUT_KEY,
         DFSConfigKeys.DFS_EDIT_LOG_TRANSFER_TIMEOUT_DEFAULT);
     throttler = getThrottler(conf);
+    metrics = journal.getMetrics();
   }
 
   void stopSync() {
@@ -411,6 +413,8 @@ public class JournalNodeSyncer {
         LOG.warn("Deleting " + tmpEditsFile + " has failed");
       }
       return false;
+    } else {
+      metrics.incrNumEditLogsSynced();
     }
     return true;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbc6d254/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestJournalNodeSync.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestJournalNodeSync.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestJournalNodeSync.java
deleted file mode 100644
index 8415a6f..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestJournalNodeSync.java
+++ /dev/null
@@ -1,265 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.qjournal;
-
-import com.google.common.base.Supplier;
-import com.google.common.collect.Lists;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.HdfsConfiguration;
-import org.apache.hadoop.hdfs.MiniDFSCluster;
-import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
-import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
-import org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile;
-import static org.apache.hadoop.hdfs.server.namenode.FileJournalManager
-    .getLogFile;
-
-import org.apache.hadoop.test.GenericTestUtils;
-import org.junit.After;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.List;
-import java.util.Random;
-
-/**
- * Unit test for Journal Node formatting upon re-installation and syncing.
- */
-public class TestJournalNodeSync {
-  private MiniQJMHACluster qjmhaCluster;
-  private MiniDFSCluster dfsCluster;
-  private MiniJournalCluster jCluster;
-  private FileSystem fs;
-  private FSNamesystem namesystem;
-  private int editsPerformed = 0;
-  private final String jid = "ns1";
-
-  @Before
-  public void setUpMiniCluster() throws IOException {
-    final Configuration conf = new HdfsConfiguration();
-    conf.setBoolean(DFSConfigKeys.DFS_JOURNALNODE_ENABLE_SYNC_KEY, true);
-    conf.setLong(DFSConfigKeys.DFS_JOURNALNODE_SYNC_INTERVAL_KEY, 1000L);
-    qjmhaCluster = new MiniQJMHACluster.Builder(conf).setNumNameNodes(2)
-      .build();
-    dfsCluster = qjmhaCluster.getDfsCluster();
-    jCluster = qjmhaCluster.getJournalCluster();
-
-    dfsCluster.transitionToActive(0);
-    fs = dfsCluster.getFileSystem(0);
-    namesystem = dfsCluster.getNamesystem(0);
-  }
-
-  @After
-  public void shutDownMiniCluster() throws IOException {
-    if (qjmhaCluster != null) {
-      qjmhaCluster.shutdown();
-    }
-  }
-
-  @Test(timeout=30000)
-  public void testJournalNodeSync() throws Exception {
-    File firstJournalDir = jCluster.getJournalDir(0, jid);
-    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
-        .getCurrentDir();
-
-    // Generate some edit logs and delete one.
-    long firstTxId = generateEditLog();
-    generateEditLog();
-
-    File missingLog = deleteEditLog(firstJournalCurrentDir, firstTxId);
-
-    GenericTestUtils.waitFor(editLogExists(Lists.newArrayList(missingLog)),
-        500, 10000);
-  }
-
-  @Test(timeout=30000)
-  public void testSyncForMultipleMissingLogs() throws Exception {
-    File firstJournalDir = jCluster.getJournalDir(0, jid);
-    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
-        .getCurrentDir();
-
-    // Generate some edit logs and delete two.
-    long firstTxId = generateEditLog();
-    long nextTxId = generateEditLog();
-
-    List<File> missingLogs = Lists.newArrayList();
-    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
-    missingLogs.add(deleteEditLog(firstJournalCurrentDir, nextTxId));
-
-    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 10000);
-  }
-
-  @Test(timeout=30000)
-  public void testSyncForDiscontinuousMissingLogs() throws Exception {
-    File firstJournalDir = jCluster.getJournalDir(0, jid);
-    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
-        .getCurrentDir();
-
-    // Generate some edit logs and delete two discontinuous logs.
-    long firstTxId = generateEditLog();
-    generateEditLog();
-    long nextTxId = generateEditLog();
-
-    List<File> missingLogs = Lists.newArrayList();
-    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
-    missingLogs.add(deleteEditLog(firstJournalCurrentDir, nextTxId));
-
-    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 10000);
-  }
-
-  @Test(timeout=30000)
-  public void testMultipleJournalsMissingLogs() throws Exception {
-    File firstJournalDir = jCluster.getJournalDir(0, jid);
-    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
-        .getCurrentDir();
-
-    File secondJournalDir = jCluster.getJournalDir(1, jid);
-    StorageDirectory sd = new StorageDirectory(secondJournalDir);
-    File secondJournalCurrentDir = sd.getCurrentDir();
-
-    // Generate some edit logs and delete one log from two journals.
-    long firstTxId = generateEditLog();
-    generateEditLog();
-
-    List<File> missingLogs = Lists.newArrayList();
-    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
-    missingLogs.add(deleteEditLog(secondJournalCurrentDir, firstTxId));
-
-    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 10000);
-  }
-
-  @Test(timeout=60000)
-  public void testMultipleJournalsMultipleMissingLogs() throws Exception {
-    File firstJournalDir = jCluster.getJournalDir(0, jid);
-    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
-        .getCurrentDir();
-
-    File secondJournalDir = jCluster.getJournalDir(1, jid);
-    File secondJournalCurrentDir = new StorageDirectory(secondJournalDir)
-        .getCurrentDir();
-
-    File thirdJournalDir = jCluster.getJournalDir(2, jid);
-    File thirdJournalCurrentDir = new StorageDirectory(thirdJournalDir)
-        .getCurrentDir();
-
-    // Generate some edit logs and delete multiple logs in multiple journals.
-    long firstTxId = generateEditLog();
-    long secondTxId = generateEditLog();
-    long thirdTxId = generateEditLog();
-
-    List<File> missingLogs = Lists.newArrayList();
-    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
-    missingLogs.add(deleteEditLog(secondJournalCurrentDir, firstTxId));
-    missingLogs.add(deleteEditLog(secondJournalCurrentDir, secondTxId));
-    missingLogs.add(deleteEditLog(thirdJournalCurrentDir, thirdTxId));
-
-    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 30000);
-  }
-
-  // Test JournalNode Sync by randomly deleting edit logs from one or two of
-  // the journals.
-  @Test(timeout=60000)
-  public void testRandomJournalMissingLogs() throws Exception {
-    Random randomJournal = new Random();
-
-    List<File> journalCurrentDirs = Lists.newArrayList();
-
-    for (int i = 0; i < 3; i++) {
-      journalCurrentDirs.add(new StorageDirectory(jCluster.getJournalDir(i,
-          jid)).getCurrentDir());
-    }
-
-    int count = 0;
-    long lastStartTxId;
-    int journalIndex;
-    List<File> missingLogs = Lists.newArrayList();
-    while (count < 5) {
-      lastStartTxId = generateEditLog();
-
-      // Delete the last edit log segment from randomly selected journal node
-      journalIndex = randomJournal.nextInt(3);
-      missingLogs.add(deleteEditLog(journalCurrentDirs.get(journalIndex),
-          lastStartTxId));
-
-      // Delete the last edit log segment from two journals for some logs
-      if (count % 2 == 0) {
-        journalIndex = (journalIndex + 1) % 3;
-        missingLogs.add(deleteEditLog(journalCurrentDirs.get(journalIndex),
-            lastStartTxId));
-      }
-
-      count++;
-    }
-
-    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 30000);
-  }
-
-  private File deleteEditLog(File currentDir, long startTxId)
-      throws IOException {
-    EditLogFile logFile = getLogFile(currentDir, startTxId);
-    while (logFile.isInProgress()) {
-      dfsCluster.getNameNode(0).getRpcServer().rollEditLog();
-      logFile = getLogFile(currentDir, startTxId);
-    }
-    File deleteFile = logFile.getFile();
-    Assert.assertTrue("Couldn't delete edit log file", deleteFile.delete());
-
-    return deleteFile;
-  }
-
-  /**
-   * Do a mutative metadata operation on the file system.
-   *
-   * @return true if the operation was successful, false otherwise.
-   */
-  private boolean doAnEdit() throws IOException {
-    return fs.mkdirs(new Path("/tmp", Integer.toString(editsPerformed++)));
-  }
-
-  /**
-   * Does an edit and rolls the Edit Log.
-   *
-   * @return the startTxId of next segment after rolling edits.
-   */
-  private long generateEditLog() throws IOException {
-    long startTxId = namesystem.getFSImage().getEditLog().getLastWrittenTxId();
-    Assert.assertTrue("Failed to do an edit", doAnEdit());
-    dfsCluster.getNameNode(0).getRpcServer().rollEditLog();
-    return startTxId;
-  }
-
-  private Supplier<Boolean> editLogExists(List<File> editLogs) {
-    Supplier<Boolean> supplier = new Supplier<Boolean>() {
-      @Override
-      public Boolean get() {
-        for (File editLog : editLogs) {
-          if (!editLog.exists()) {
-            return false;
-          }
-        }
-        return true;
-      }
-    };
-    return supplier;
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbc6d254/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java
index 9dd6846..28ec708 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java
@@ -102,7 +102,7 @@ public class TestJournalNode {
   @Test(timeout=100000)
   public void testJournal() throws Exception {
     MetricsRecordBuilder metrics = MetricsAsserts.getMetrics(
-        journal.getMetricsForTests().getName());
+        journal.getMetrics().getName());
     MetricsAsserts.assertCounter("BatchesWritten", 0L, metrics);
     MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
     MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);
@@ -117,7 +117,7 @@ public class TestJournalNode {
     ch.sendEdits(1L, 1, 1, "hello".getBytes(Charsets.UTF_8)).get();
     
     metrics = MetricsAsserts.getMetrics(
-        journal.getMetricsForTests().getName());
+        journal.getMetrics().getName());
     MetricsAsserts.assertCounter("BatchesWritten", 1L, metrics);
     MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
     MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);
@@ -130,7 +130,7 @@ public class TestJournalNode {
     ch.sendEdits(1L, 2, 1, "goodbye".getBytes(Charsets.UTF_8)).get();
 
     metrics = MetricsAsserts.getMetrics(
-        journal.getMetricsForTests().getName());
+        journal.getMetrics().getName());
     MetricsAsserts.assertCounter("BatchesWritten", 2L, metrics);
     MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 1L, metrics);
     MetricsAsserts.assertGauge("CurrentLagTxns", 98L, metrics);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbc6d254/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java
new file mode 100644
index 0000000..2964f05
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java
@@ -0,0 +1,439 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.qjournal.server;
+
+import com.google.common.base.Supplier;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.qjournal.MiniJournalCluster;
+import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;
+import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile;
+import static org.apache.hadoop.hdfs.server.namenode.FileJournalManager
+    .getLogFile;
+import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.List;
+import java.util.Random;
+
+/**
+ * Unit test for Journal Node formatting upon re-installation and syncing.
+ */
+public class TestJournalNodeSync {
+  private Configuration conf;
+  private MiniQJMHACluster qjmhaCluster;
+  private MiniDFSCluster dfsCluster;
+  private MiniJournalCluster jCluster;
+  private FileSystem fs;
+  private FSNamesystem namesystem;
+  private int editsPerformed = 0;
+  private final String jid = "ns1";
+
+  @Rule
+  public TestName testName = new TestName();
+
+  @Before
+  public void setUpMiniCluster() throws IOException {
+    conf = new HdfsConfiguration();
+    conf.setBoolean(DFSConfigKeys.DFS_JOURNALNODE_ENABLE_SYNC_KEY, true);
+    conf.setLong(DFSConfigKeys.DFS_JOURNALNODE_SYNC_INTERVAL_KEY, 1000L);
+    if (testName.getMethodName().equals(
+        "testSyncAfterJNdowntimeWithoutQJournalQueue")) {
+      conf.setInt(DFSConfigKeys.DFS_QJOURNAL_QUEUE_SIZE_LIMIT_KEY, 0);
+    }
+    qjmhaCluster = new MiniQJMHACluster.Builder(conf).setNumNameNodes(2)
+      .build();
+    dfsCluster = qjmhaCluster.getDfsCluster();
+    jCluster = qjmhaCluster.getJournalCluster();
+
+    dfsCluster.transitionToActive(0);
+    fs = dfsCluster.getFileSystem(0);
+    namesystem = dfsCluster.getNamesystem(0);
+  }
+
+  @After
+  public void shutDownMiniCluster() throws IOException {
+    if (qjmhaCluster != null) {
+      qjmhaCluster.shutdown();
+    }
+  }
+
+  @Test(timeout=30000)
+  public void testJournalNodeSync() throws Exception {
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+
+    // Generate some edit logs and delete one.
+    long firstTxId = generateEditLog();
+    generateEditLog();
+
+    File missingLog = deleteEditLog(firstJournalCurrentDir, firstTxId);
+
+    GenericTestUtils.waitFor(editLogExists(Lists.newArrayList(missingLog)),
+        500, 10000);
+  }
+
+  @Test(timeout=30000)
+  public void testSyncForMultipleMissingLogs() throws Exception {
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+
+    // Generate some edit logs and delete two.
+    long firstTxId = generateEditLog();
+    long nextTxId = generateEditLog();
+
+    List<File> missingLogs = Lists.newArrayList();
+    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
+    missingLogs.add(deleteEditLog(firstJournalCurrentDir, nextTxId));
+
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 10000);
+  }
+
+  @Test(timeout=30000)
+  public void testSyncForDiscontinuousMissingLogs() throws Exception {
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+
+    // Generate some edit logs and delete two discontinuous logs.
+    long firstTxId = generateEditLog();
+    generateEditLog();
+    long nextTxId = generateEditLog();
+
+    List<File> missingLogs = Lists.newArrayList();
+    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
+    missingLogs.add(deleteEditLog(firstJournalCurrentDir, nextTxId));
+
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 10000);
+  }
+
+  @Test(timeout=30000)
+  public void testMultipleJournalsMissingLogs() throws Exception {
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+
+    File secondJournalDir = jCluster.getJournalDir(1, jid);
+    StorageDirectory sd = new StorageDirectory(secondJournalDir);
+    File secondJournalCurrentDir = sd.getCurrentDir();
+
+    // Generate some edit logs and delete one log from two journals.
+    long firstTxId = generateEditLog();
+    generateEditLog();
+
+    List<File> missingLogs = Lists.newArrayList();
+    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
+    missingLogs.add(deleteEditLog(secondJournalCurrentDir, firstTxId));
+
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 10000);
+  }
+
+  @Test(timeout=60000)
+  public void testMultipleJournalsMultipleMissingLogs() throws Exception {
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+
+    File secondJournalDir = jCluster.getJournalDir(1, jid);
+    File secondJournalCurrentDir = new StorageDirectory(secondJournalDir)
+        .getCurrentDir();
+
+    File thirdJournalDir = jCluster.getJournalDir(2, jid);
+    File thirdJournalCurrentDir = new StorageDirectory(thirdJournalDir)
+        .getCurrentDir();
+
+    // Generate some edit logs and delete multiple logs in multiple journals.
+    long firstTxId = generateEditLog();
+    long secondTxId = generateEditLog();
+    long thirdTxId = generateEditLog();
+
+    List<File> missingLogs = Lists.newArrayList();
+    missingLogs.add(deleteEditLog(firstJournalCurrentDir, firstTxId));
+    missingLogs.add(deleteEditLog(secondJournalCurrentDir, firstTxId));
+    missingLogs.add(deleteEditLog(secondJournalCurrentDir, secondTxId));
+    missingLogs.add(deleteEditLog(thirdJournalCurrentDir, thirdTxId));
+
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 30000);
+  }
+
+  // Test JournalNode Sync by randomly deleting edit logs from one or two of
+  // the journals.
+  @Test(timeout=60000)
+  public void testRandomJournalMissingLogs() throws Exception {
+    Random randomJournal = new Random();
+
+    List<File> journalCurrentDirs = Lists.newArrayList();
+
+    for (int i = 0; i < 3; i++) {
+      journalCurrentDirs.add(new StorageDirectory(jCluster.getJournalDir(i,
+          jid)).getCurrentDir());
+    }
+
+    int count = 0;
+    long lastStartTxId;
+    int journalIndex;
+    List<File> missingLogs = Lists.newArrayList();
+    while (count < 5) {
+      lastStartTxId = generateEditLog();
+
+      // Delete the last edit log segment from a randomly selected JournalNode
+      journalIndex = randomJournal.nextInt(3);
+      missingLogs.add(deleteEditLog(journalCurrentDirs.get(journalIndex),
+          lastStartTxId));
+
+      // On every other iteration, also delete the same segment from a second journal
+      if (count % 2 == 0) {
+        journalIndex = (journalIndex + 1) % 3;
+        missingLogs.add(deleteEditLog(journalCurrentDirs.get(journalIndex),
+            lastStartTxId));
+      }
+
+      count++;
+    }
+
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 30000);
+  }
+
+  // Test JournalNode Sync when a JN is down while the NN is actively
+  // writing logs, and the JN comes back up after some time.
+  @Test (timeout=300_000)
+  public void testSyncAfterJNdowntime() throws Exception {
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+    File secondJournalDir = jCluster.getJournalDir(1, jid);
+    File secondJournalCurrentDir = new StorageDirectory(secondJournalDir)
+        .getCurrentDir();
+
+    long[] startTxIds = new long[10];
+
+    startTxIds[0] = generateEditLog();
+    startTxIds[1] = generateEditLog();
+
+    // Stop the first JN
+    jCluster.getJournalNode(0).stop(0);
+
+    // Roll some more edits while the first JN is down
+    for (int i = 2; i < 10; i++) {
+      startTxIds[i] = generateEditLog(5);
+    }
+
+    // Re-start the first JN
+    jCluster.restartJournalNode(0);
+
+    // Roll an edit to update the committed tx id of the first JN
+    generateEditLog();
+
+    // List the edit logs rolled during JN down time.
+    List<File> missingLogs = Lists.newArrayList();
+    for (int i = 2; i < 10; i++) {
+      EditLogFile logFile = getLogFile(secondJournalCurrentDir, startTxIds[i],
+          false);
+      missingLogs.add(new File(firstJournalCurrentDir,
+          logFile.getFile().getName()));
+    }
+
+    // Check that JNSync downloaded the edit logs rolled during JN down time.
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 30000);
+  }
+
+  /**
+   * Test JournalNode Sync when a JN is down while the NN is actively writing
+   * logs, and the JN comes back up after some time, with edit log queueing
+   * disabled. Queueing is disabled during the cluster setup
+   * {@link #setUpMiniCluster()}.
+   * @throws Exception
+   */
+  @Test (timeout=300_000)
+  public void testSyncAfterJNdowntimeWithoutQJournalQueue() throws Exception {
+    // Queueing is disabled during the cluster setup in setUpMiniCluster().
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+    File secondJournalDir = jCluster.getJournalDir(1, jid);
+    File secondJournalCurrentDir = new StorageDirectory(secondJournalDir)
+        .getCurrentDir();
+
+    long[] startTxIds = new long[10];
+
+    startTxIds[0] = generateEditLog();
+    startTxIds[1] = generateEditLog(2);
+
+    // Stop the first JN
+    jCluster.getJournalNode(0).stop(0);
+
+    // Roll some more edits while the first JN is down
+    for (int i = 2; i < 10; i++) {
+      startTxIds[i] = generateEditLog(5);
+    }
+
+    // Re-start the first JN
+    jCluster.restartJournalNode(0);
+
+    // After the JN restart, and before another edit is rolled, the missing
+    // edit logs will not be synced, as the committed tx id of the JN will
+    // be less than the start tx ids of the missing edit logs and edit log
+    // queueing has been disabled.
+    // Roll an edit to update the committed tx id of the first JN
+    generateEditLog(2);
+
+    // List the edit logs rolled during JN down time.
+    List<File> missingLogs = Lists.newArrayList();
+    for (int i = 2; i < 10; i++) {
+      EditLogFile logFile = getLogFile(secondJournalCurrentDir, startTxIds[i],
+          false);
+      missingLogs.add(new File(firstJournalCurrentDir,
+          logFile.getFile().getName()));
+    }
+
+    // Check that JNSync downloaded the edit logs rolled during JN down time.
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 30000);
+
+    // Check that all the missing edit logs have been downloaded via
+    // JournalNodeSyncer alone (as the edit log queueing has been disabled)
+    long numEditLogsSynced = jCluster.getJournalNode(0).getOrCreateJournal(jid)
+        .getMetrics().getNumEditLogsSynced().value();
+    Assert.assertTrue("Edit logs downloaded outside syncer. Expected 8 or " +
+            "more downloads, got " + numEditLogsSynced + " downloads instead",
+        numEditLogsSynced >= 8);
+  }
+
+  // Test JournalNode Sync when a JN is formatted while NN is actively writing
+  // logs.
+  @Test (timeout=300_000)
+  public void testSyncAfterJNformat() throws Exception {
+    File firstJournalDir = jCluster.getJournalDir(0, jid);
+    File firstJournalCurrentDir = new StorageDirectory(firstJournalDir)
+        .getCurrentDir();
+    File secondJournalDir = jCluster.getJournalDir(1, jid);
+    File secondJournalCurrentDir = new StorageDirectory(secondJournalDir)
+        .getCurrentDir();
+
+    long[] startTxIds = new long[10];
+
+    startTxIds[0] = generateEditLog(1);
+    startTxIds[1] = generateEditLog(2);
+    startTxIds[2] = generateEditLog(4);
+    startTxIds[3] = generateEditLog(6);
+
+    Journal journal1 = jCluster.getJournalNode(0).getOrCreateJournal(jid);
+    NamespaceInfo nsInfo = journal1.getStorage().getNamespaceInfo();
+
+    // Delete contents of current directory of one JN
+    for (File file : firstJournalCurrentDir.listFiles()) {
+      file.delete();
+    }
+
+    // Format the JN
+    journal1.format(nsInfo);
+
+    // Roll some more edits
+    for (int i = 4; i < 10; i++) {
+      startTxIds[i] = generateEditLog(5);
+    }
+
+    // List all the edit log segments that the freshly formatted JN must sync.
+    List<File> missingLogs = Lists.newArrayList();
+    for (int i = 0; i < 10; i++) {
+      EditLogFile logFile = getLogFile(secondJournalCurrentDir, startTxIds[i],
+          false);
+      missingLogs.add(new File(firstJournalCurrentDir,
+          logFile.getFile().getName()));
+    }
+
+    // Check that the formatted JN has all the edit logs.
+    GenericTestUtils.waitFor(editLogExists(missingLogs), 500, 30000);
+  }
+
+  private File deleteEditLog(File currentDir, long startTxId)
+      throws IOException {
+    EditLogFile logFile = getLogFile(currentDir, startTxId);
+    while (logFile.isInProgress()) {
+      dfsCluster.getNameNode(0).getRpcServer().rollEditLog();
+      logFile = getLogFile(currentDir, startTxId);
+    }
+    File deleteFile = logFile.getFile();
+    Assert.assertTrue("Couldn't delete edit log file", deleteFile.delete());
+
+    return deleteFile;
+  }
+
+  /**
+   * Do a mutative metadata operation on the file system.
+   *
+   * @return true if the operation was successful, false otherwise.
+   */
+  private boolean doAnEdit() throws IOException {
+    return fs.mkdirs(new Path("/tmp", Integer.toString(editsPerformed++)));
+  }
+
+  /**
+   * Does an edit and rolls the Edit Log.
+   *
+   * @return the startTxId of the next segment after rolling edits.
+   */
+  private long generateEditLog() throws IOException {
+    return generateEditLog(1);
+  }
+
+  /**
+   * Does the specified number of edits and rolls the Edit Log.
+   *
+   * @param numEdits the number of edits to perform
+   * @return the startTxId of the next segment after rolling edits.
+   */
+  private long generateEditLog(int numEdits) throws IOException {
+    long startTxId = namesystem.getFSImage().getEditLog().getLastWrittenTxId();
+    for (int i = 1; i <= numEdits; i++) {
+      Assert.assertTrue("Failed to do an edit", doAnEdit());
+    }
+    dfsCluster.getNameNode(0).getRpcServer().rollEditLog();
+    return startTxId;
+  }
+
+  private Supplier<Boolean> editLogExists(List<File> editLogs) {
+    Supplier<Boolean> supplier = new Supplier<Boolean>() {
+      @Override
+      public Boolean get() {
+        for (File editLog : editLogs) {
+          if (!editLog.exists()) {
+            return false;
+          }
+        }
+        return true;
+      }
+    };
+    return supplier;
+  }
+}




[37/50] [abbrv] hadoop git commit: YARN-6033. Add support for sections in container-executor configuration file. (Varun Vasudev via wandga)

Posted by wa...@apache.org.
YARN-6033. Add support for sections in container-executor configuration file. (Varun Vasudev via wandga)

Change-Id: Ibc6d2a959debe5d8ff2b51504149742449d1f1da


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec694145
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec694145
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec694145

Branch: refs/heads/YARN-5881
Commit: ec694145cf9c0ade7606813871ca2a4a371def8e
Parents: 63cfcb9
Author: Wangda Tan <wa...@apache.org>
Authored: Wed Aug 9 10:51:29 2017 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Wed Aug 9 10:51:29 2017 -0700

----------------------------------------------------------------------
 .../hadoop-yarn-server-nodemanager/pom.xml      |  38 ++
 .../src/CMakeLists.txt                          |  22 +
 .../container-executor/impl/configuration.c     | 672 +++++++++++++------
 .../container-executor/impl/configuration.h     | 182 +++--
 .../impl/container-executor.c                   |  39 +-
 .../impl/container-executor.h                   |  52 +-
 .../container-executor/impl/get_executable.c    |   1 +
 .../main/native/container-executor/impl/main.c  |  17 +-
 .../main/native/container-executor/impl/util.c  | 134 ++++
 .../main/native/container-executor/impl/util.h  | 115 ++++
 .../test-configurations/configuration-1.cfg     |  31 +
 .../test-configurations/configuration-2.cfg     |  28 +
 .../test/test-configurations/old-config.cfg     |  25 +
 .../test/test-container-executor.c              |  15 +-
 .../test/test_configuration.cc                  | 432 ++++++++++++
 .../native/container-executor/test/test_main.cc |  29 +
 .../native/container-executor/test/test_util.cc | 138 ++++
 17 files changed, 1649 insertions(+), 321 deletions(-)
----------------------------------------------------------------------
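
For illustration, the parser added in this change reads an INI-like layout:
'#' begins a comment, a '[section-name]' line opens a section that runs until
the next header, and keys that appear before any header are kept in an
unnamed ("") section for backward compatibility. A minimal sketch of a
container-executor.cfg in the new format (the [docker] section name and the
key inside it are hypothetical here, not taken from this commit):

    # keys before any section header land in the unnamed "" section
    yarn.nodemanager.linux-container-executor.group=hadoop
    banned.users=hdfs,yarn    # trailing comments are trimmed
    min.user.id=1000

    [docker]
    # hypothetical section; keys below belong to "docker" until the next header
    module.enabled=false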


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
index 28ee0d9..a50a769 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
@@ -215,6 +215,44 @@
                   <results>${project.build.directory}/native-results</results>
                 </configuration>
               </execution>
+              <execution>
+                <id>cetest</id>
+                <goals><goal>cmake-test</goal></goals>
+                <phase>test</phase>
+                <configuration>
+                  <!-- this must match the XML file name below, without the TEST- prefix -->
+                  <testName>cetest</testName>
+                  <workingDirectory>${project.build.directory}/native/test</workingDirectory>
+                  <source>${basedir}/src</source>
+                  <binary>${project.build.directory}/native/test/cetest</binary>
+                  <args>
+                    <arg>--gtest_filter=-Perf.</arg>
+                    <arg>--gtest_output=xml:${project.build.directory}/surefire-reports/TEST-cetest.xml</arg>
+                  </args>
+                  <results>${project.build.directory}/surefire-reports</results>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-antrun-plugin</artifactId>
+            <executions>
+              <execution>
+                <id>make</id>
+                <phase>compile</phase>
+                <goals>
+                  <goal>run</goal>
+                </goals>
+                <configuration>
+                  <target>
+                    <copy todir="${project.build.directory}/native/test/"
+                      overwrite="true">
+                      <fileset dir="${basedir}/src/main/native/container-executor/resources/test" />
+                    </copy>
+                  </target>
+                </configuration>
+              </execution>
             </executions>
           </plugin>
         </plugins>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
index 5b52536..100d7ca 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
@@ -19,6 +19,9 @@ cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
 list(APPEND CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/../../../../../hadoop-common-project/hadoop-common)
 include(HadoopCommon)
 
+# Set gtest path
+set(GTEST_SRC_DIR ${CMAKE_SOURCE_DIR}/../../../../../hadoop-common-project/hadoop-common/src/main/native/gtest)
+
 # determine if container-executor.conf.dir is an absolute
 # path in case the OS we're compiling on doesn't have
 # a hook in get_executable. We'll use this define
@@ -80,12 +83,20 @@ endfunction()
 include_directories(
     ${CMAKE_CURRENT_SOURCE_DIR}
     ${CMAKE_BINARY_DIR}
+    ${GTEST_SRC_DIR}/include
     main/native/container-executor
     main/native/container-executor/impl
 )
+# add gtest as system library to suppress gcc warnings
+include_directories(SYSTEM ${GTEST_SRC_DIR}/include)
+
 configure_file(${CMAKE_SOURCE_DIR}/config.h.cmake ${CMAKE_BINARY_DIR}/config.h)
 
+add_library(gtest ${GTEST_SRC_DIR}/gtest-all.cc)
+set_target_properties(gtest PROPERTIES COMPILE_FLAGS "-w")
+
 add_library(container
+    main/native/container-executor/impl/util.c
     main/native/container-executor/impl/configuration.c
     main/native/container-executor/impl/container-executor.c
     main/native/container-executor/impl/get_executable.c
@@ -95,9 +106,11 @@ add_library(container
 add_executable(container-executor
     main/native/container-executor/impl/main.c
 )
+
 target_link_libraries(container-executor
     container
 )
+
 output_directory(container-executor target/usr/local/bin)
 
 add_executable(test-container-executor
@@ -107,3 +120,12 @@ target_link_libraries(test-container-executor
     container ${EXTRA_LIBS}
 )
 output_directory(test-container-executor target/usr/local/bin)
+
+# unit tests for container executor
+add_executable(cetest
+        main/native/container-executor/impl/util.c
+        main/native/container-executor/test/test_configuration.cc
+        main/native/container-executor/test/test_main.cc
+        main/native/container-executor/test/test_util.cc)
+target_link_libraries(cetest gtest)
+output_directory(cetest test)
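
As a usage note for the cetest target wired up above (assuming the standard
Hadoop native build profile; the exact invocation may differ per environment),
the gtest suite can be run through Maven or the binary can be invoked
directly with the same arguments the pom passes:

    # run all native tests, including cetest (assumes -Pnative is available)
    mvn test -Pnative
    # or invoke the gtest binary directly, mirroring the args in the pom
    ./target/native/test/cetest --gtest_filter=-Perf. \
        --gtest_output=xml:./target/surefire-reports/TEST-cetest.xml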

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
index a6d7a9c..12dbc4c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
@@ -20,35 +20,55 @@
 #include <libgen.h>
 
 #include "configuration.h"
-#include "container-executor.h"
+#include "util.h"
 
+#define __STDC_FORMAT_MACROS
 #include <inttypes.h>
 #include <errno.h>
 #include <unistd.h>
-#include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <sys/stat.h>
-#include <sys/types.h>
-#include <limits.h>
-#include <ctype.h>
 
 #define MAX_SIZE 10
 
+static const char COMMENT_BEGIN_CHAR = '#';
+static const char SECTION_LINE_BEGIN_CHAR = '[';
+static const char SECTION_LINE_END_CHAR = ']';
+
+//clean up method for freeing section
+void free_section(struct section *section) {
+  int i = 0;
+  for (i = 0; i < section->size; i++) {
+    if (section->kv_pairs[i]->key != NULL) {
+      free((void *) section->kv_pairs[i]->key);
+    }
+    if (section->kv_pairs[i]->value != NULL) {
+      free((void *) section->kv_pairs[i]->value);
+    }
+    free(section->kv_pairs[i]);
+  }
+  if (section->kv_pairs) {
+    free(section->kv_pairs);
+    section->kv_pairs = NULL;
+  }
+  if (section->name) {
+    free(section->name);
+    section->name = NULL;
+  }
+  section->size = 0;
+}
+
 //clean up method for freeing configuration
-void free_configurations(struct configuration *cfg) {
+void free_configuration(struct configuration *cfg) {
   int i = 0;
   for (i = 0; i < cfg->size; i++) {
-    if (cfg->confdetails[i]->key != NULL) {
-      free((void *)cfg->confdetails[i]->key);
+    if (cfg->sections[i] != NULL) {
+      free_section(cfg->sections[i]);
     }
-    if (cfg->confdetails[i]->value != NULL) {
-      free((void *)cfg->confdetails[i]->value);
-    }
-    free(cfg->confdetails[i]);
   }
-  if (cfg->size > 0) {
-    free(cfg->confdetails);
+  if (cfg->sections) {
+    free(cfg->sections);
   }
   cfg->size = 0;
 }
@@ -65,13 +85,13 @@ static int is_only_root_writable(const char *file) {
   }
   if (file_stat.st_uid != 0) {
     fprintf(ERRORFILE, "File %s must be owned by root, but is owned by %" PRId64 "\n",
-            file, (int64_t)file_stat.st_uid);
+            file, (int64_t) file_stat.st_uid);
     return 0;
   }
   if ((file_stat.st_mode & (S_IWGRP | S_IWOTH)) != 0) {
     fprintf(ERRORFILE,
-	    "File %s must not be world or group writable, but is %03lo\n",
-	    file, (unsigned long)file_stat.st_mode & (~S_IFMT));
+            "File %s must not be world or group writable, but is %03lo\n",
+            file, (unsigned long) file_stat.st_mode & (~S_IFMT));
     return 0;
   }
   return 1;
@@ -82,9 +102,9 @@ static int is_only_root_writable(const char *file) {
  *
  * NOTE: relative path names are resolved relative to the second argument not getwd(3)
  */
-char *resolve_config_path(const char* file_name, const char *root) {
+char *resolve_config_path(const char *file_name, const char *root) {
   const char *real_fname = NULL;
-  char buffer[EXECUTOR_PATH_MAX*2 + 1];
+  char buffer[EXECUTOR_PATH_MAX * 2 + 1];
 
   if (file_name[0] == '/') {
     real_fname = file_name;
@@ -96,7 +116,7 @@ char *resolve_config_path(const char* file_name, const char *root) {
 #ifdef HAVE_CANONICALIZE_FILE_NAME
   char * ret = (real_fname == NULL) ? NULL : canonicalize_file_name(real_fname);
 #else
-  char * ret = (real_fname == NULL) ? NULL : realpath(real_fname, NULL);
+  char *ret = (real_fname == NULL) ? NULL : realpath(real_fname, NULL);
 #endif
 #ifdef DEBUG
   fprintf(stderr,"ret = %s\n", ret);
@@ -112,10 +132,19 @@ char *resolve_config_path(const char* file_name, const char *root) {
  * configuration and potentially cause damage.
  * returns 0 if permissions are ok
  */
-int check_configuration_permissions(const char* file_name) {
+int check_configuration_permissions(const char *file_name) {
+  if (!file_name) {
+    return -1;
+  }
+
   // copy the input so that we can modify it with dirname
-  char* dir = strdup(file_name);
-  char* buffer = dir;
+  char *dir = strdup(file_name);
+  if (!dir) {
+    fprintf(stderr, "Failed to make a copy of filename in %s.\n", __func__);
+    return -1;
+  }
+
+  char *buffer = dir;
   do {
     if (!is_only_root_writable(dir)) {
       free(buffer);
@@ -128,167 +157,396 @@ int check_configuration_permissions(const char* file_name) {
 }
 
 /**
- * Trim whitespace from beginning and end.
-*/
-char* trim(char* input)
-{
-    char *val_begin;
-    char *val_end;
-    char *ret;
-
-    if (input == NULL) {
-      return NULL;
+ * Read a line from the config file and return it without the newline.
+ * The caller must free the memory allocated.
+ */
+static char *read_config_line(FILE *conf_file) {
+  char *line = NULL;
+  size_t linesize = 100000;
+  ssize_t size_read = 0;
+  size_t eol = 0;
+
+  line = (char *) malloc(linesize);
+  if (line == NULL) {
+    fprintf(ERRORFILE, "malloc failed while reading configuration file.\n");
+    exit(OUT_OF_MEMORY);
+  }
+  size_read = getline(&line, &linesize, conf_file);
+
+  //feof returns true only after we read past EOF, so a file whose last
+  //line has no trailing newline can reach this point.
+  //If size_read is negative, check for the EOF condition.
+  if (size_read == -1) {
+    free(line);
+    line = NULL;
+    if (!feof(conf_file)) {
+      fprintf(ERRORFILE, "Line read returned -1 without eof\n");
+      exit(INVALID_CONFIG_FILE);
+    }
+  } else {
+    eol = strlen(line) - 1;
+    if (line[eol] == '\n') {
+      //trim the ending new line
+      line[eol] = '\0';
     }
+  }
+  return line;
+}
 
-    val_begin = input;
-    val_end = input + strlen(input);
+/**
+ * Return if the given line is a comment line.
+ *
+ * @param line the line to check
+ *
+ * @return 1 if the line is a comment line, 0 otherwise
+ */
+static int is_comment_line(const char *line) {
+  if (line != NULL) {
+    return (line[0] == COMMENT_BEGIN_CHAR);
+  }
+  return 0;
+}
 
-    while (val_begin < val_end && isspace(*val_begin))
-      val_begin++;
-    while (val_end > val_begin && isspace(*(val_end - 1)))
-      val_end--;
+/**
+ * Return if the given line is a section start line.
+ *
+ * @param line the line to check
+ *
+ * @return 1 if the line is a section start line, 0 otherwise
+ */
+static int is_section_start_line(const char *line) {
+  size_t len = 0;
+  if (line != NULL) {
+    len = strlen(line) - 1;
+    return (line[0] == SECTION_LINE_BEGIN_CHAR
+            && line[len] == SECTION_LINE_END_CHAR);
+  }
+  return 0;
+}
 
-    ret = (char *) malloc(
-            sizeof(char) * (val_end - val_begin + 1));
-    if (ret == NULL) {
-      fprintf(ERRORFILE, "Allocation error\n");
+/**
+ * Return the name of the section from the given section start line. The
+ * caller must free the memory used.
+ *
+ * @param line the line to extract the section name from
+ *
+ * @return string with the name of the section, NULL otherwise
+ */
+static char *get_section_name(const char *line) {
+  char *name = NULL;
+  size_t len;
+
+  if (is_section_start_line(line)) {
+    // length of the name is the line length - 2 (to account for '[' and ']')
+    len = strlen(line) - 2;
+    name = (char *) malloc(len + 1);
+    if (name == NULL) {
+      fprintf(ERRORFILE, "malloc failed while reading section name.\n");
       exit(OUT_OF_MEMORY);
     }
-
-    strncpy(ret, val_begin, val_end - val_begin);
-    ret[val_end - val_begin] = '\0';
-    return ret;
+    strncpy(name, line + sizeof(char), len);
+    name[len] = '\0';
+  }
+  return name;
 }
 
-void read_config(const char* file_name, struct configuration *cfg) {
-  FILE *conf_file;
-  char *line;
+/**
+ * Read an entry for the section from the line. Function returns 0 if an entry
+ * was found, non-zero otherwise. Return values less than 0 indicate an error
+ * with the config file.
+ *
+ * @param line the line to read the entry from
+ * @param section the struct to read the entry into
+ *
+ * @return 0 if an entry was found
+ *         <0 for config file errors
+ *         >0 for issues such as empty line
+ *
+ */
+static int read_section_entry(const char *line, struct section *section) {
   char *equaltok;
   char *temp_equaltok;
-  size_t linesize = 1000;
-  int size_read = 0;
-
-  if (file_name == NULL) {
-    fprintf(ERRORFILE, "Null configuration filename passed in\n");
-    exit(INVALID_CONFIG_FILE);
+  const char *splitter = "=";
+  char *buffer;
+  size_t len = 0;
+  if (line == NULL || section == NULL) {
+    fprintf(ERRORFILE, "NULL params passed to read_section_entry");
+    return -1;
+  }
+  len = strlen(line);
+  if (len == 0) {
+    return 1;
+  }
+  if ((section->size) % MAX_SIZE == 0) {
+    section->kv_pairs = (struct kv_pair **) realloc(
+        section->kv_pairs,
+        sizeof(struct kv_pair *) * (MAX_SIZE + section->size));
+    if (section->kv_pairs == NULL) {
+      fprintf(ERRORFILE,
+              "Failed re-allocating memory for configuration items\n");
+      exit(OUT_OF_MEMORY);
+    }
   }
 
-  #ifdef DEBUG
-    fprintf(LOGFILE, "read_config :Conf file name is : %s \n", file_name);
-  #endif
+  buffer = strdup(line);
+  if (!buffer) {
+    fprintf(ERRORFILE, "Failed to allocate memory for line in %s\n", __func__);
+    exit(OUT_OF_MEMORY);
+  }
 
-  //allocate space for ten configuration items.
-  cfg->confdetails = (struct confentry **) malloc(sizeof(struct confentry *)
-      * MAX_SIZE);
-  cfg->size = 0;
-  conf_file = fopen(file_name, "r");
-  if (conf_file == NULL) {
-    fprintf(ERRORFILE, "Invalid conf file provided : %s \n", file_name);
+  //tokenize first to get key and list of values.
+  //if no equals is found ignore this line, can be an empty line also
+  equaltok = strtok_r(buffer, splitter, &temp_equaltok);
+  if (equaltok == NULL) {
+    fprintf(ERRORFILE, "Error with line '%s', no '=' found\n", buffer);
     exit(INVALID_CONFIG_FILE);
   }
-  while(!feof(conf_file)) {
-    line = (char *) malloc(linesize);
-    if(line == NULL) {
-      fprintf(ERRORFILE, "malloc failed while reading configuration file.\n");
-      exit(OUT_OF_MEMORY);
+  section->kv_pairs[section->size] = (struct kv_pair *) malloc(
+      sizeof(struct kv_pair));
+  if (section->kv_pairs[section->size] == NULL) {
+    fprintf(ERRORFILE, "Failed allocating memory for single section item\n");
+    exit(OUT_OF_MEMORY);
+  }
+  memset(section->kv_pairs[section->size], 0,
+         sizeof(struct kv_pair));
+  section->kv_pairs[section->size]->key = trim(equaltok);
+
+  equaltok = strtok_r(NULL, splitter, &temp_equaltok);
+  if (equaltok == NULL) {
+    // this can happen because no value was set
+    // e.g. banned.users=#this is a comment
+    int has_values = 1;
+    if (strstr(line, splitter) == NULL) {
+      fprintf(ERRORFILE, "configuration tokenization failed, error with line %s\n", line);
+      has_values = 0;
     }
-    size_read = getline(&line,&linesize,conf_file);
 
-    //feof returns true only after we read past EOF.
-    //so a file with no new line, at last can reach this place
-    //if size_read returns negative check for eof condition
-    if (size_read == -1) {
-      free(line);
-      if(!feof(conf_file)){
-        exit(INVALID_CONFIG_FILE);
-      } else {
-        break;
-      }
-    }
-    int eol = strlen(line) - 1;
-    if(line[eol] == '\n') {
-        //trim the ending new line
-        line[eol] = '\0';
+    // It is not a valid line, free memory.
+    free((void *) section->kv_pairs[section->size]->key);
+    free((void *) section->kv_pairs[section->size]);
+    section->kv_pairs[section->size] = NULL;
+    free(buffer);
+
+    // Return -1 when no values
+    if (!has_values) {
+      return -1;
     }
-    //comment line
-    if(line[0] == '#') {
-      free(line);
-      continue;
+
+    // Return 2 for comments
+    return 2;
+  }
+
+#ifdef DEBUG
+  fprintf(LOGFILE, "read_config : Adding conf value : %s \n", equaltok);
+#endif
+
+  section->kv_pairs[section->size]->value = trim(equaltok);
+  section->size++;
+  free(buffer);
+  return 0;
+}
+
+/**
+ * Remove any trailing comment from the supplied line. Function modifies the
+ * argument provided.
+ *
+ * @param line the line from which to remove the comment
+ */
+static void trim_comment(char *line) {
+  char *begin_comment = NULL;
+  if (line != NULL) {
+    begin_comment = strchr(line, COMMENT_BEGIN_CHAR);
+    if (begin_comment != NULL) {
+      *begin_comment = '\0';
     }
-    //tokenize first to get key and list of values.
-    //if no equals is found ignore this line, can be an empty line also
-    equaltok = strtok_r(line, "=", &temp_equaltok);
-    if(equaltok == NULL) {
+  }
+}
+
+/**
+ * Allocate a section struct and initialize it. The memory must be freed by
+ * the caller. Function calls exit if any error occurs.
+ *
+ * @return pointer to the allocated section struct
+ *
+ */
+static struct section *allocate_section() {
+  struct section *section = (struct section *) malloc(sizeof(struct section));
+  if (section == NULL) {
+    fprintf(ERRORFILE, "malloc failed while allocating section.\n");
+    exit(OUT_OF_MEMORY);
+  }
+  section->name = NULL;
+  section->kv_pairs = NULL;
+  section->size = 0;
+  return section;
+}
+
+/**
+ * Populate the given section struct with fields from the config file.
+ *
+ * @param conf_file the file to read from
+ * @param section pointer to the section struct to populate
+ *
+ */
+static void populate_section_fields(FILE *conf_file, struct section *section) {
+  char *line;
+  long int offset = 0;
+  while (!feof(conf_file)) {
+    offset = ftell(conf_file);
+    line = read_config_line(conf_file);
+    if (line != NULL) {
+      if (!is_comment_line(line)) {
+        trim_comment(line);
+        if (!is_section_start_line(line)) {
+          if (section->name != NULL) {
+            if (read_section_entry(line, section) < 0) {
+              fprintf(ERRORFILE, "Error parsing line: %s\n", line);
+              exit(INVALID_CONFIG_FILE);
+            }
+          } else {
+            fprintf(ERRORFILE, "Line '%s' doesn't belong to a section\n",
+                    line);
+            exit(INVALID_CONFIG_FILE);
+          }
+        } else {
+          if (section->name == NULL) {
+            section->name = get_section_name(line);
+            if (strlen(section->name) == 0) {
+              fprintf(ERRORFILE, "Empty section name\n");
+              exit(INVALID_CONFIG_FILE);
+            }
+          } else {
+            // we've reached the next section
+            fseek(conf_file, offset, SEEK_SET);
+            free(line);
+            return;
+          }
+        }
+      }
       free(line);
-      continue;
-    }
-    cfg->confdetails[cfg->size] = (struct confentry *) malloc(
-            sizeof(struct confentry));
-    if(cfg->confdetails[cfg->size] == NULL) {
-      fprintf(LOGFILE,
-          "Failed allocating memory for single configuration item\n");
-      goto cleanup;
     }
+  }
+}
 
-    #ifdef DEBUG
-      fprintf(LOGFILE, "read_config : Adding conf key : %s \n", equaltok);
-    #endif
+/**
+ * Read the current section from the conf file. A section starts with a line
+ * of the form '[section-name]' and continues until the start of the next
+ * section.
+ */
+static struct section *read_section(FILE *conf_file) {
+  struct section *section = allocate_section();
+  populate_section_fields(conf_file, section);
+  if (section->name == NULL) {
+    free_section(section);
+    section = NULL;
+  }
+  return section;
+}
+
+/**
+ * Merge two sections and free the second one after the merge, if desired.
+ * @param section1 the first section
+ * @param section2 the second section
+ * @param free_second_section free the second section if set
+ */
+static void merge_sections(struct section *section1, struct section *section2, const int free_second_section) {
+  int i = 0;
+  section1->kv_pairs = (struct kv_pair **) realloc(
+            section1->kv_pairs,
+            sizeof(struct kv_pair *) * (section1->size + section2->size));
+  if (section1->kv_pairs == NULL) {
+    fprintf(ERRORFILE,
+                "Failed re-allocating memory for configuration items\n");
+    exit(OUT_OF_MEMORY);
+  }
+  for (i = 0; i < section2->size; ++i) {
+    section1->kv_pairs[section1->size + i] = section2->kv_pairs[i];
+  }
+  section1->size += section2->size;
+  if (free_second_section) {
+    free(section2->name);
+    memset(section2, 0, sizeof(*section2));
+    free(section2);
+  }
+}
 
-    memset(cfg->confdetails[cfg->size], 0, sizeof(struct confentry));
-    cfg->confdetails[cfg->size]->key = trim(equaltok);
+int read_config(const char *file_path, struct configuration *cfg) {
+  FILE *conf_file;
 
-    equaltok = strtok_r(NULL, "=", &temp_equaltok);
-    if (equaltok == NULL) {
-      fprintf(LOGFILE, "configuration tokenization failed \n");
-      goto cleanup;
-    }
-    //means value is commented so don't store the key
-    if(equaltok[0] == '#') {
-      free(line);
-      free((void *)cfg->confdetails[cfg->size]->key);
-      free(cfg->confdetails[cfg->size]);
-      continue;
+  if (file_path == NULL) {
+    fprintf(ERRORFILE, "Null configuration filename passed in\n");
+    return INVALID_CONFIG_FILE;
+  }
+
+#ifdef DEBUG
+  fprintf(LOGFILE, "read_config :Conf file name is : %s \n", file_path);
+#endif
+
+  cfg->size = 0;
+  conf_file = fopen(file_path, "r");
+  if (conf_file == NULL) {
+    fprintf(ERRORFILE, "Invalid conf file provided, unable to open file"
+        " : %s \n", file_path);
+    return (INVALID_CONFIG_FILE);
+  }
+
+  cfg->sections = (struct section **) malloc(
+        sizeof(struct section *) * MAX_SIZE);
+  if (!cfg->sections) {
+    fprintf(ERRORFILE,
+            "Failed to allocate memory for configuration sections\n");
+    exit(OUT_OF_MEMORY);
+  }
+
+  // populate any entries in the older format (no sections)
+  cfg->sections[cfg->size] = allocate_section();
+  cfg->sections[cfg->size]->name = strdup("");
+  populate_section_fields(conf_file, cfg->sections[cfg->size]);
+  if (cfg->sections[cfg->size]) {
+    if (cfg->sections[cfg->size]->size) {
+      cfg->size++;
+    } else {
+      free_section(cfg->sections[cfg->size]);
     }
+  }
 
-    #ifdef DEBUG
-      fprintf(LOGFILE, "read_config : Adding conf value : %s \n", equaltok);
-    #endif
-
-    cfg->confdetails[cfg->size]->value = trim(equaltok);
-    if((cfg->size + 1) % MAX_SIZE  == 0) {
-      cfg->confdetails = (struct confentry **) realloc(cfg->confdetails,
-          sizeof(struct confentry **) * (MAX_SIZE + cfg->size));
-      if (cfg->confdetails == NULL) {
-        fprintf(LOGFILE,
-            "Failed re-allocating memory for configuration items\n");
-        goto cleanup;
+  // populate entries in the sections format
+  while (!feof(conf_file)) {
+    cfg->sections[cfg->size] = NULL;
+    struct section *new_section = read_section(conf_file);
+    if (new_section != NULL) {
+      struct section *existing_section =
+          get_configuration_section(new_section->name, cfg);
+      if (existing_section != NULL) {
+        merge_sections((struct section *) existing_section, new_section, 1);
+      } else {
+        cfg->sections[cfg->size] = new_section;
       }
     }
-    if(cfg->confdetails[cfg->size]) {
-        cfg->size++;
-    }
 
-    free(line);
+    // Check if we need to expand memory for sections.
+    if (cfg->sections[cfg->size]) {
+      if ((cfg->size + 1) % MAX_SIZE == 0) {
+        cfg->sections = (struct section **) realloc(cfg->sections,
+                           sizeof(struct section *) * (MAX_SIZE + cfg->size));
+        if (cfg->sections == NULL) {
+          fprintf(ERRORFILE,
+                  "Failed re-allocating memory for configuration items\n");
+          exit(OUT_OF_MEMORY);
+        }
+      }
+      cfg->size++;
+    }
   }
 
-  //close the file
   fclose(conf_file);
 
   if (cfg->size == 0) {
-    fprintf(ERRORFILE, "Invalid configuration provided in %s\n", file_name);
-    exit(INVALID_CONFIG_FILE);
-  }
-
-  //clean up allocated file name
-  return;
-  //free spaces alloced.
-  cleanup:
-  if (line != NULL) {
-    free(line);
+    free_configuration(cfg);
+    fprintf(ERRORFILE, "Invalid configuration provided in %s\n", file_path);
+    return INVALID_CONFIG_FILE;
   }
-  fclose(conf_file);
-  free_configurations(cfg);
-  return;
+  return 0;
 }
 
 /*
@@ -297,11 +555,14 @@ void read_config(const char* file_name, struct configuration *cfg) {
  * array, next time onwards used the populated array.
  *
  */
-char * get_value(const char* key, struct configuration *cfg) {
+char *get_section_value(const char *key, const struct section *section) {
   int count;
-  for (count = 0; count < cfg->size; count++) {
-    if (strcmp(cfg->confdetails[count]->key, key) == 0) {
-      return strdup(cfg->confdetails[count]->value);
+  if (key == NULL || section == NULL) {
+    return NULL;
+  }
+  for (count = 0; count < section->size; count++) {
+    if (strcmp(section->kv_pairs[count]->key, key) == 0) {
+      return strdup(section->kv_pairs[count]->value);
     }
   }
   return NULL;
@@ -311,61 +572,80 @@ char * get_value(const char* key, struct configuration *cfg) {
  * Function to return an array of values for a key.
  * Value delimiter is assumed to be a ','.
  */
-char ** get_values(const char * key, struct configuration *cfg) {
-  char *value = get_value(key, cfg);
-  return extract_values_delim(value, ",");
+char **get_section_values(const char *key, const struct section *cfg) {
+  return get_section_values_delimiter(key, cfg, ",");
 }
 
 /**
  * Function to return an array of values for a key, using the specified
  delimiter.
  */
-char ** get_values_delim(const char * key, struct configuration *cfg,
-    const char *delim) {
-  char *value = get_value(key, cfg);
-  return extract_values_delim(value, delim);
+char **get_section_values_delimiter(const char *key, const struct section *cfg,
+                                    const char *delim) {
+  if (key == NULL || cfg == NULL || delim == NULL) {
+    return NULL;
+  }
+  char *value = get_section_value(key, cfg);
+  char **split_values = split_delimiter(value, delim);
+
+  if (value) {
+    free(value);
+  }
+
+  return split_values;
 }
 
-char ** extract_values_delim(char *value, const char *delim) {
-  char ** toPass = NULL;
-  char *tempTok = NULL;
-  char *tempstr = NULL;
-  int size = 0;
-  int toPassSize = MAX_SIZE;
-  //first allocate any array of 10
-  if(value != NULL) {
-    toPass = (char **) malloc(sizeof(char *) * toPassSize);
-    tempTok = strtok_r((char *)value, delim, &tempstr);
-    while (tempTok != NULL) {
-      toPass[size++] = tempTok;
-      if(size == toPassSize) {
-        toPassSize += MAX_SIZE;
-        toPass = (char **) realloc(toPass,(sizeof(char *) * toPassSize));
-      }
-      tempTok = strtok_r(NULL, delim, &tempstr);
-    }
+char *get_configuration_value(const char *key, const char *section,
+                              const struct configuration *cfg) {
+  const struct section *section_ptr;
+  if (key == NULL || section == NULL || cfg == NULL) {
+    return NULL;
   }
-  if (toPass != NULL) {
-    toPass[size] = NULL;
+  section_ptr = get_configuration_section(section, cfg);
+  if (section_ptr != NULL) {
+    return get_section_value(key, section_ptr);
   }
-  return toPass;
+  return NULL;
 }
 
-/**
- * Extracts array of values from the '%' separated list of values.
- */
-char ** extract_values(char *value) {
-  return extract_values_delim(value, "%");
+char **get_configuration_values(const char *key, const char *section,
+                                const struct configuration *cfg) {
+  const struct section *section_ptr;
+  if (key == NULL || section == NULL || cfg == NULL) {
+    return NULL;
+  }
+  section_ptr = get_configuration_section(section, cfg);
+  if (section_ptr != NULL) {
+    return get_section_values(key, section_ptr);
+  }
+  return NULL;
+}
+
+char **get_configuration_values_delimiter(const char *key, const char *section,
+                                          const struct configuration *cfg, const char *delim) {
+  const struct section *section_ptr;
+  if (key == NULL || section == NULL || cfg == NULL || delim == NULL) {
+    return NULL;
+  }
+  section_ptr = get_configuration_section(section, cfg);
+  if (section_ptr != NULL) {
+    return get_section_values_delimiter(key, section_ptr, delim);
+  }
+  return NULL;
 }
 
-// free an entry set of values
-void free_values(char** values) {
-  if (*values != NULL) {
-    free(*values);
+struct section *get_configuration_section(const char *section,
+                                          const struct configuration *cfg) {
+  int i = 0;
+  if (cfg == NULL || section == NULL) {
+    return NULL;
   }
-  if (values != NULL) {
-    free(values);
+  for (i = 0; i < cfg->size; ++i) {
+    if (strcmp(cfg->sections[i]->name, section) == 0) {
+      return cfg->sections[i];
+    }
   }
+  return NULL;
 }
 
 /**
@@ -376,12 +656,12 @@ int get_kv_key(const char *input, char *out, size_t out_len) {
   if (input == NULL)
     return -EINVAL;
 
-  char *split = strchr(input, '=');
+  const char *split = strchr(input, '=');
 
   if (split == NULL)
     return -EINVAL;
 
-  int key_len = split - input;
+  unsigned long key_len = split - input;
 
   if (out_len < (key_len + 1) || out == NULL)
     return -ENAMETOOLONG;
@@ -400,13 +680,13 @@ int get_kv_value(const char *input, char *out, size_t out_len) {
   if (input == NULL)
     return -EINVAL;
 
-  char *split = strchr(input, '=');
+  const char *split = strchr(input, '=');
 
   if (split == NULL)
     return -EINVAL;
 
   split++; // advance past '=' to the value
-  int val_len = (input + strlen(input)) - split;
+  unsigned long val_len = (input + strlen(input)) - split;
 
   if (out_len < (val_len + 1) || out == NULL)
     return -ENAMETOOLONG;
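
Taken together, the rewritten API reads the whole file into a struct
configuration and hands out strdup'd copies of values. A minimal caller
sketch under the signatures above (the file path is illustrative; read_config
returns 0 on success and a non-zero code such as INVALID_CONFIG_FILE on
error):

    #include <stdio.h>
    #include <stdlib.h>
    #include "configuration.h"

    int main(void) {
      struct configuration cfg = {.size = 0, .sections = NULL};
      if (read_config("/etc/hadoop/container-executor.cfg", &cfg) != 0) {
        fprintf(stderr, "failed to read configuration\n");
        return 1;
      }
      // keys from the old, section-less format live in the unnamed "" section
      char *min_uid = get_configuration_value("min.user.id", "", &cfg);
      if (min_uid != NULL) {
        printf("min.user.id = %s\n", min_uid);
        free(min_uid);  // values are strdup'd copies owned by the caller
      }
      free_configuration(&cfg);
      return 0;
    }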

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
index 2d14867..1ea5561 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
@@ -16,6 +16,9 @@
  * limitations under the License.
  */
 
+#ifndef __YARN_CONTAINER_EXECUTOR_CONFIG_H__
+#define __YARN_CONTAINER_EXECUTOR_CONFIG_H__
+
 #ifdef __FreeBSD__
 #define _WITH_GETLINE
 #endif
@@ -23,62 +26,160 @@
 #include <stddef.h>
 
 /** Define a platform-independent constant instead of using PATH_MAX */
-
 #define EXECUTOR_PATH_MAX 4096
 
-/**
- * Ensure that the configuration file and all of the containing directories
- * are only writable by root. Otherwise, an attacker can change the
- * configuration and potentially cause damage.
- * returns 0 if permissions are ok
- */
-int check_configuration_permissions(const char* file_name);
-
-/**
- * Return a string with the configuration file path name resolved via realpath(3)
- *
- * NOTE: relative path names are resolved relative to the second argument not getwd(3)
- */
-char *resolve_config_path(const char* file_name, const char *root);
-
-// Config data structures.
-struct confentry {
+// Configuration data structures.
+struct kv_pair {
   const char *key;
   const char *value;
 };
 
+struct section {
+  int size;
+  char *name;
+  struct kv_pair **kv_pairs;
+};
+
 struct configuration {
   int size;
-  struct confentry **confdetails;
+  struct section **sections;
 };
 
-// read the given configuration file into the specified config struct.
-void read_config(const char* config_file, struct configuration *cfg);
+/**
+ * Function to ensure that the configuration file and all of the containing
+ * directories are only writable by root. Otherwise, an attacker can change
+ * the configuration and potentially cause damage.
+ *
+ * @param file_name name of the config file
+ *
+ * @returns 0 if permissions are correct, non-zero on error
+ */
+int check_configuration_permissions(const char *file_name);
+
+/**
+ * Return a string with the configuration file path name resolved via
+ * realpath(3). Relative path names are resolved relative to the second
+ * argument and not getwd(3). It's up to the caller to free the returned
+ * value.
+ *
+ * @param file_name name of the config file
+ * @param root the path against which relative path names are to be resolved
+ *
+ * @returns the resolved configuration file path
+ */
+char* resolve_config_path(const char *file_name, const char *root);
 
-//method exposed to get the configurations
-char *get_value(const char* key, struct configuration *cfg);
+/**
+ * Read the given configuration file into the specified configuration struct.
+ * It's the responsibility of the caller to call free_configuration to free
+ * the allocated memory. The function will check to ensure that the
+ * configuration file has the appropriate owner and permissions.
+ *
+ * @param file_path name of the configuration file to be read
+ * @param cfg the configuration structure to be filled.
+ *
+ * @return 0 on success, non-zero if there was an error
+ */
+int read_config(const char *file_path, struct configuration *cfg);
+
+/**
+ * Get the value for a key in the specified section. It's up to the caller to
+ * free the memory used for storing the return value.
+ *
+ * @param key the name of the key
+ * @param section the section to be looked up
+ *
+ * @return pointer to the value if the key was found, null otherwise
+ */
+char* get_section_value(const char *key, const struct section *section);
 
-//function to return array of values pointing to the key. Values are
-//comma seperated strings.
-char ** get_values(const char* key, struct configuration *cfg);
+/**
+ * Function to get the values for a given key in the specified section.
+ * The value is split by ",". It's up to the caller to free the memory used
+ * for storing the return values.
+ *
+ * @param key the key to be looked up
+ * @param section the section to be looked up
+ *
+ * @return array of values, null if the key was not found
+ */
+char** get_section_values(const char *key, const struct section *section);
 
 /**
- * Function to return an array of values for a key, using the specified
- delimiter.
+ * Function to get the values for a given key in the specified section.
+ * The value is split by the specified delimiter. It's up to the caller to
+ * free the memory used for storing the return values.
+ *
+ * @param key the key to be looked up
+ * @param section the section to be looked up
+ * @param delim the delimiter to be used to split the value
+ *
+ * @return array of values, null if the key was not found
  */
-char ** get_values_delim(const char * key, struct configuration *cfg,
+char** get_section_values_delimiter(const char *key, const struct section *section,
     const char *delim);
 
-// Extracts array of values from the comma separated list of values.
-char ** extract_values(char *value);
+/**
+ * Get the value for a key in the specified section in the specified
+ * configuration. It's up to the caller to free the memory used for storing
+ * the return value.
+ *
+ * @param key the name of the key
+ * @param section the name of the section to be looked up
+ * @param cfg the configuration to be used
+ *
+ * @return pointer to the value if the key was found, null otherwise
+ */
+char* get_configuration_value(const char *key, const char* section,
+    const struct configuration *cfg);
+
+/**
+ * Function to get the values for a given key in the specified section in the
+ * specified configuration. The value is split by ",". It's up to the caller to
+ * free the memory used for storing the return values.
+ *
+ * @param key the key to be looked up
+ * @param section the name of the section to be looked up
+ * @param cfg the configuration to be looked up
+ *
+ * @return array of values, null if the key was not found
+ */
+char** get_configuration_values(const char *key, const char* section,
+    const struct configuration *cfg);
 
-char ** extract_values_delim(char *value, const char *delim);
+/**
+ * Function to get the values for a given key in the specified section in the
+ * specified configuration. The value is split by the specified delimiter.
+ * It's up to the caller to free the memory used for storing the return values.
+ *
+ * @param key the key to be looked up
+ * @param section the name of the section to be looked up
+ * @param cfg the configuration to be looked up
+ * @param delimiter the delimiter to be used to split the value
+ *
+ * @return array of values, null if the key was not found
+ */
+char** get_configuration_values_delimiter(const char *key, const char* section,
+    const struct configuration *cfg, const char *delimiter);
 
-// free the memory returned by get_values
-void free_values(char** values);
+/**
+ * Function to retrieve the specified section from the configuration.
+ *
+ * @param section the name of the section to retrieve
+ * @param cfg the configuration structure to use
+ *
+ * @return pointer to section struct containing details of the section
+ *         null on error
+ */
+struct section* get_configuration_section(const char *section,
+    const struct configuration *cfg);
 
-//method to free allocated configuration
-void free_configurations(struct configuration *cfg);
+/**
+ * Method to free an allocated config struct.
+ *
+ * @param cfg pointer to the structure to free
+ */
+void free_configuration(struct configuration *cfg);
 
 /**
  * If str is a string of the form key=val, find 'key'
@@ -106,11 +207,4 @@ int get_kv_key(const char *input, char *out, size_t out_len);
  */
 int get_kv_value(const char *input, char *out, size_t out_len);
 
-/**
- * Trim whitespace from beginning and end.
- *
- * @param input    Input string that needs to be trimmed
- *
- * @return the trimmed string allocated with malloc. I has to be freed by the caller
-*/
-char* trim(char* input);
+#endif
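
As a short sketch for the key=value helpers declared above (assuming both
return 0 on success, to match the -EINVAL and -ENAMETOOLONG error paths shown
in the implementation):

    char key[64];
    char value[256];
    const char *entry = "min.user.id=1000";
    // get_kv_key copies "min.user.id" into key; get_kv_value copies "1000"
    if (get_kv_key(entry, key, sizeof(key)) == 0 &&
        get_kv_value(entry, value, sizeof(value)) == 0) {
      printf("%s -> %s\n", key, value);
    }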

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index def628e..9f754c4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -19,6 +19,8 @@
 #include "configuration.h"
 #include "container-executor.h"
 #include "utils/string-utils.h"
+#include "util.h"
+#include "config.h"
 
 #include <inttypes.h>
 #include <libgen.h>
@@ -43,8 +45,6 @@
 #include <getopt.h>
 #include <regex.h>
 
-#include "config.h"
-
 #ifndef HAVE_FCHMODAT
 #include "compat/fchmodat.h"
 #endif
@@ -92,7 +92,8 @@ FILE* ERRORFILE = NULL;
 static uid_t nm_uid = -1;
 static gid_t nm_gid = -1;
 
-struct configuration executor_cfg = {.size=0, .confdetails=NULL};
+struct configuration CFG = {.size=0, .sections=NULL};
+struct section executor_cfg = {.size=0, .kv_pairs=NULL};
 
 char *concatenate(char *concat_pattern, char *return_path_name,
    int numArgs, ...);
@@ -103,18 +104,25 @@ void set_nm_uid(uid_t user, gid_t group) {
 }
 
 //function used to load the configurations present in the secure config
-void read_executor_config(const char* file_name) {
-    read_config(file_name, &executor_cfg);
+void read_executor_config(const char *file_name) {
+  const struct section *tmp = NULL;
+  int ret = read_config(file_name, &CFG);
+  if (ret == 0) {
+    tmp = get_configuration_section("", &CFG);
+    if (tmp != NULL) {
+      executor_cfg = *tmp;
+    }
+  }
 }
 
 //function used to free executor configuration data
 void free_executor_configurations() {
-    free_configurations(&executor_cfg);
+    free_configuration(&CFG);
 }
 
 //Lookup nodemanager group from container executor configuration.
 char *get_nodemanager_group() {
-    return get_value(NM_GROUP_KEY, &executor_cfg);
+    return get_section_value(NM_GROUP_KEY, &executor_cfg);
 }
 
 int check_executor_permissions(char *executable_file) {
@@ -431,8 +439,8 @@ int change_user(uid_t user, gid_t group) {
 }
 
 int is_feature_enabled(const char* feature_key, int default_value,
-                              struct configuration *cfg) {
-    char *enabled_str = get_value(feature_key, cfg);
+                              struct section *cfg) {
+    char *enabled_str = get_section_value(feature_key, cfg);
     int enabled = default_value;
 
     if (enabled_str != NULL) {
@@ -753,7 +761,7 @@ static struct passwd* get_user_info(const char* user) {
 }
 
 int is_whitelisted(const char *user) {
-  char **whitelist = get_values(ALLOWED_SYSTEM_USERS_KEY, &executor_cfg);
+  char **whitelist = get_section_values(ALLOWED_SYSTEM_USERS_KEY, &executor_cfg);
   char **users = whitelist;
   if (whitelist != NULL) {
     for(; *users; ++users) {
@@ -781,7 +789,7 @@ struct passwd* check_user(const char *user) {
     fflush(LOGFILE);
     return NULL;
   }
-  char *min_uid_str = get_value(MIN_USERID_KEY, &executor_cfg);
+  char *min_uid_str = get_section_value(MIN_USERID_KEY, &executor_cfg);
   int min_uid = DEFAULT_MIN_USERID;
   if (min_uid_str != NULL) {
     char *end_ptr = NULL;
@@ -808,7 +816,7 @@ struct passwd* check_user(const char *user) {
     free(user_info);
     return NULL;
   }
-  char **banned_users = get_values(BANNED_USERS_KEY, &executor_cfg);
+  char **banned_users = get_section_values(BANNED_USERS_KEY, &executor_cfg);
   banned_users = banned_users == NULL ?
     (char**) DEFAULT_BANNED_USERS : banned_users;
   char **banned_user = banned_users;
@@ -1194,7 +1202,6 @@ char** tokenize_docker_command(const char *input, int *split_counter) {
   char *line = (char *)calloc(strlen(input) + 1, sizeof(char));
   char **linesplit = (char **) malloc(sizeof(char *));
   char *p = NULL;
-  int c = 0;
   *split_counter = 0;
   strncpy(line, input, strlen(input));
 
@@ -1408,12 +1415,12 @@ char* parse_docker_command_file(const char* command_file) {
 
 int run_docker(const char *command_file) {
   char* docker_command = parse_docker_command_file(command_file);
-  char* docker_binary = get_value(DOCKER_BINARY_KEY, &executor_cfg);
+  char* docker_binary = get_section_value(DOCKER_BINARY_KEY, &executor_cfg);
   docker_binary = check_docker_binary(docker_binary);
 
   char* docker_command_with_binary = calloc(sizeof(char), EXECUTOR_PATH_MAX);
   snprintf(docker_command_with_binary, EXECUTOR_PATH_MAX, "%s %s", docker_binary, docker_command);
-  char **args = extract_values_delim(docker_command_with_binary, " ");
+  char **args = split_delimiter(docker_command_with_binary, " ");
 
   int exit_code = -1;
   if (execvp(docker_binary, args) != 0) {
@@ -1574,7 +1581,7 @@ int launch_docker_container_as_user(const char * user, const char *app_id,
   uid_t prev_uid = geteuid();
 
   char *docker_command = parse_docker_command_file(command_file);
-  char *docker_binary = get_value(DOCKER_BINARY_KEY, &executor_cfg);
+  char *docker_binary = get_section_value(DOCKER_BINARY_KEY, &executor_cfg);
   docker_binary = check_docker_binary(docker_binary);
 
   fprintf(LOGFILE, "Creating script paths...\n");
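
Aside on the pattern above: the refactor separates parsing from lookup, so a
struct configuration now owns the parsed sections while a struct section holds
one section's key/value pairs. The executor therefore keeps two globals: CFG
for ownership and freeing, and executor_cfg as a copy of the unnamed ("")
section that all existing lookups go through. A minimal sketch of a lookup
built on that pair (it assumes read_executor_config has already run;
NM_GROUP_KEY is the key constant from the executor headers):

    /* Sketch only: CFG and executor_cfg populated by read_executor_config(),
     * exactly as in the diff above. */
    char *group = get_section_value(NM_GROUP_KEY, &executor_cfg);
    if (group == NULL) {
      fprintf(ERRORFILE, "No value configured for %s\n", NM_GROUP_KEY);
    }
    /* On shutdown, free the owning configuration, not the copy: */
    free_executor_configurations();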

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
index 1dc0491..ea8b5e3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
@@ -35,51 +35,6 @@ enum command {
   LIST_AS_USER = 5
 };
 
-enum errorcodes {
-  INVALID_ARGUMENT_NUMBER = 1,
-  //INVALID_USER_NAME 2
-  INVALID_COMMAND_PROVIDED = 3,
-  // SUPER_USER_NOT_ALLOWED_TO_RUN_TASKS (NOT USED) 4
-  INVALID_NM_ROOT_DIRS = 5,
-  SETUID_OPER_FAILED, //6
-  UNABLE_TO_EXECUTE_CONTAINER_SCRIPT, //7
-  UNABLE_TO_SIGNAL_CONTAINER, //8
-  INVALID_CONTAINER_PID, //9
-  // ERROR_RESOLVING_FILE_PATH (NOT_USED) 10
-  // RELATIVE_PATH_COMPONENTS_IN_FILE_PATH (NOT USED) 11
-  // UNABLE_TO_STAT_FILE (NOT USED) 12
-  // FILE_NOT_OWNED_BY_ROOT (NOT USED) 13
-  // PREPARE_CONTAINER_DIRECTORIES_FAILED (NOT USED) 14
-  // INITIALIZE_CONTAINER_FAILED (NOT USED) 15
-  // PREPARE_CONTAINER_LOGS_FAILED (NOT USED) 16
-  // INVALID_LOG_DIR (NOT USED) 17
-  OUT_OF_MEMORY = 18,
-  // INITIALIZE_DISTCACHEFILE_FAILED (NOT USED) 19
-  INITIALIZE_USER_FAILED = 20,
-  PATH_TO_DELETE_IS_NULL, //21
-  INVALID_CONTAINER_EXEC_PERMISSIONS, //22
-  // PREPARE_JOB_LOGS_FAILED (NOT USED) 23
-  INVALID_CONFIG_FILE = 24,
-  SETSID_OPER_FAILED = 25,
-  WRITE_PIDFILE_FAILED = 26,
-  WRITE_CGROUP_FAILED = 27,
-  TRAFFIC_CONTROL_EXECUTION_FAILED = 28,
-  DOCKER_RUN_FAILED = 29,
-  ERROR_OPENING_DOCKER_FILE = 30,
-  ERROR_READING_DOCKER_FILE = 31,
-  FEATURE_DISABLED = 32,
-  COULD_NOT_CREATE_SCRIPT_COPY = 33,
-  COULD_NOT_CREATE_CREDENTIALS_FILE = 34,
-  COULD_NOT_CREATE_WORK_DIRECTORIES = 35,
-  COULD_NOT_CREATE_APP_LOG_DIRECTORIES = 36,
-  COULD_NOT_CREATE_TMP_DIRECTORIES = 37,
-  ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS = 38,
-  ERROR_SANITIZING_DOCKER_COMMAND = 39,
-  DOCKER_IMAGE_INVALID = 40,
-  DOCKER_CONTAINER_NAME_INVALID = 41,
-  ERROR_COMPILING_REGEX = 42
-};
-
 enum operations {
   CHECK_SETUP = 1,
   MOUNT_CGROUPS = 2,
@@ -111,11 +66,6 @@ enum operations {
 
 extern struct passwd *user_detail;
 
-// the log file for messages
-extern FILE *LOGFILE;
-// the log file for error messages
-extern FILE *ERRORFILE;
-
 // get the executable's filename
 char* get_executable(char *argv0);
 
@@ -276,7 +226,7 @@ int create_validate_dir(const char* npath, mode_t perm, const char* path,
 
 /** Check if a feature is enabled in the specified configuration. */
 int is_feature_enabled(const char* feature_key, int default_value,
-                              struct configuration *cfg);
+                              struct section *cfg);
 
 /** Check if tc (traffic control) support is enabled in configuration. */
 int is_tc_support_enabled();
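
The relocated is_feature_enabled now reads from a struct section rather than
a whole struct configuration. A quick illustration with a feature key that
appears in the test configs below (a sketch; executor_cfg is assumed to be
populated, and FEATURE_DISABLED comes from the errorcodes enum moved into
util.h):

    /* Evaluates to 1 when feature.docker.enabled=1 in the loaded config,
     * otherwise to the supplied default (0 = disabled). */
    if (!is_feature_enabled("feature.docker.enabled", 0, &executor_cfg)) {
      fprintf(ERRORFILE, "Feature disabled: docker\n");
      exit(FEATURE_DISABLED);
    }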

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
index ce46b77..55973a2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
@@ -31,6 +31,7 @@
 #include "config.h"
 #include "configuration.h"
 #include "container-executor.h"
+#include "util.h"
 
 #include <errno.h>
 #include <stdio.h>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
index fdc0496..b2187c9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
@@ -19,6 +19,7 @@
 #include "config.h"
 #include "configuration.h"
 #include "container-executor.h"
+#include "util.h"
 
 #include <errno.h>
 #include <grp.h>
@@ -420,7 +421,7 @@ static int validate_run_as_user_commands(int argc, char **argv, int *operation)
 
       cmd_input.resources_key = resources_key;
       cmd_input.resources_value = resources_value;
-      cmd_input.resources_values = extract_values(resources_value);
+      cmd_input.resources_values = split(resources_value);
       *operation = RUN_AS_USER_LAUNCH_DOCKER_CONTAINER;
       return 0;
    } else {
@@ -471,7 +472,7 @@ static int validate_run_as_user_commands(int argc, char **argv, int *operation)
 
     cmd_input.resources_key = resources_key;
     cmd_input.resources_value = resources_value;
-    cmd_input.resources_values = extract_values(resources_value);
+    cmd_input.resources_values = split(resources_value);
     *operation = RUN_AS_USER_LAUNCH_CONTAINER;
     return 0;
 
@@ -565,8 +566,8 @@ int main(int argc, char **argv) {
     exit_code = initialize_app(cmd_input.yarn_user_name,
                             cmd_input.app_id,
                             cmd_input.cred_file,
-                            extract_values(cmd_input.local_dirs),
-                            extract_values(cmd_input.log_dirs),
+                            split(cmd_input.local_dirs),
+                            split(cmd_input.log_dirs),
                             argv + optind);
     break;
   case RUN_AS_USER_LAUNCH_DOCKER_CONTAINER:
@@ -591,8 +592,8 @@ int main(int argc, char **argv) {
                       cmd_input.script_file,
                       cmd_input.cred_file,
                       cmd_input.pid_file,
-                      extract_values(cmd_input.local_dirs),
-                      extract_values(cmd_input.log_dirs),
+                      split(cmd_input.local_dirs),
+                      split(cmd_input.log_dirs),
                       cmd_input.docker_command_file,
                       cmd_input.resources_key,
                       cmd_input.resources_values);
@@ -619,8 +620,8 @@ int main(int argc, char **argv) {
                     cmd_input.script_file,
                     cmd_input.cred_file,
                     cmd_input.pid_file,
-                    extract_values(cmd_input.local_dirs),
-                    extract_values(cmd_input.log_dirs),
+                    split(cmd_input.local_dirs),
+                    split(cmd_input.log_dirs),
                     cmd_input.resources_key,
                     cmd_input.resources_values);
     free(cmd_input.resources_key);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
new file mode 100644
index 0000000..8e39ca8
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
@@ -0,0 +1,134 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "util.h"
+#include <stddef.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+
+char** split_delimiter(char *value, const char *delim) {
+  char **return_values = NULL;
+  char *temp_tok = NULL;
+  char *tempstr = NULL;
+  int size = 0;
+  int per_alloc_size = 10;
+  int return_values_size = per_alloc_size;
+  int failed = 0;
+
+  //first allocate an array of 10
+  if(value != NULL) {
+    return_values = (char **) malloc(sizeof(char *) * return_values_size);
+    if (!return_values) {
+      fprintf(ERRORFILE, "Allocation error for return_values in %s.\n",
+              __func__);
+      failed = 1;
+      goto cleanup;
+    }
+    memset(return_values, 0, sizeof(char *) * return_values_size);
+
+    temp_tok = strtok_r(value, delim, &tempstr);
+    while (temp_tok != NULL) {
+      temp_tok = strdup(temp_tok);
+      if (NULL == temp_tok) {
+        fprintf(ERRORFILE, "Allocation error in %s.\n", __func__);
+        failed = 1;
+        goto cleanup;
+      }
+
+      return_values[size++] = temp_tok;
+
+      // Make sure the returned values array has enough space for the trailing NULL.
+      if (size >= return_values_size - 1) {
+        return_values_size += per_alloc_size;
+        return_values = (char **) realloc(return_values,(sizeof(char *) *
+          return_values_size));
+
+        // Make sure the newly added memory is filled with NULL
+        for (int i = size; i < return_values_size; i++) {
+          return_values[i] = NULL;
+        }
+      }
+      temp_tok = strtok_r(NULL, delim, &tempstr);
+    }
+  }
+
+  // Put a trailing NULL to mark where the values end.
+  if (return_values != NULL) {
+    return_values[size] = NULL;
+  }
+
+cleanup:
+  if (failed) {
+    free_values(return_values);
+    return NULL;
+  }
+
+  return return_values;
+}
+
+/**
+ * Extracts array of values from the '%' separated list of values.
+ */
+char** split(char *value) {
+  return split_delimiter(value, "%");
+}
+
+// free an entire set of values
+void free_values(char** values) {
+  if (values != NULL) {
+    int idx = 0;
+    while (values[idx]) {
+      free(values[idx]);
+      idx++;
+    }
+    free(values);
+  }
+}
+
+/**
+ * Trim whitespace from beginning and end.
+ */
+char* trim(const char* input) {
+    const char *val_begin;
+    const char *val_end;
+    char *ret;
+
+    if (input == NULL) {
+      return NULL;
+    }
+
+    val_begin = input;
+    val_end = input + strlen(input);
+
+    while (val_begin < val_end && isspace(*val_begin))
+      val_begin++;
+    while (val_end > val_begin && isspace(*(val_end - 1)))
+      val_end--;
+
+    ret = (char *) malloc(
+            sizeof(char) * (val_end - val_begin + 1));
+    if (ret == NULL) {
+      fprintf(ERRORFILE, "Allocation error\n");
+      exit(OUT_OF_MEMORY);
+    }
+
+    strncpy(ret, val_begin, val_end - val_begin);
+    ret[val_end - val_begin] = '\0';
+    return ret;
+}
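
A short usage sketch for the helpers defined above. The directory literal
mirrors the test configs below; note that split and split_delimiter modify
their input in place via strtok_r, and the sketch assumes linking against the
executor sources that define ERRORFILE:

    #include "util.h"
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
      /* split() tokenizes on '%', the NodeManager's list separator. */
      char dirs[] = "/var/run/yarn%/tmp/mydir";
      char **values = split(dirs);
      for (char **v = values; v != NULL && *v != NULL; ++v) {
        printf("dir: %s\n", *v);
      }
      free_values(values);

      /* trim() returns a malloc'd copy without surrounding whitespace. */
      char *clean = trim("  yarn  ");
      printf("trimmed: '%s'\n", clean);
      free(clean);
      return 0;
    }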

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
new file mode 100644
index 0000000..a8a12a9
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef __YARN_POSIX_CONTAINER_EXECUTOR_UTIL_H__
+#define __YARN_POSIX_CONTAINER_EXECUTOR_UTIL_H__
+
+#include <stdio.h>
+
+enum errorcodes {
+  INVALID_ARGUMENT_NUMBER = 1,
+  //INVALID_USER_NAME 2
+  INVALID_COMMAND_PROVIDED = 3,
+  // SUPER_USER_NOT_ALLOWED_TO_RUN_TASKS (NOT USED) 4
+  INVALID_NM_ROOT_DIRS = 5,
+  SETUID_OPER_FAILED, //6
+  UNABLE_TO_EXECUTE_CONTAINER_SCRIPT, //7
+  UNABLE_TO_SIGNAL_CONTAINER, //8
+  INVALID_CONTAINER_PID, //9
+  // ERROR_RESOLVING_FILE_PATH (NOT_USED) 10
+  // RELATIVE_PATH_COMPONENTS_IN_FILE_PATH (NOT USED) 11
+  // UNABLE_TO_STAT_FILE (NOT USED) 12
+  // FILE_NOT_OWNED_BY_ROOT (NOT USED) 13
+  // PREPARE_CONTAINER_DIRECTORIES_FAILED (NOT USED) 14
+  // INITIALIZE_CONTAINER_FAILED (NOT USED) 15
+  // PREPARE_CONTAINER_LOGS_FAILED (NOT USED) 16
+  // INVALID_LOG_DIR (NOT USED) 17
+  OUT_OF_MEMORY = 18,
+  // INITIALIZE_DISTCACHEFILE_FAILED (NOT USED) 19
+  INITIALIZE_USER_FAILED = 20,
+  PATH_TO_DELETE_IS_NULL, //21
+  INVALID_CONTAINER_EXEC_PERMISSIONS, //22
+  // PREPARE_JOB_LOGS_FAILED (NOT USED) 23
+  INVALID_CONFIG_FILE = 24,
+  SETSID_OPER_FAILED = 25,
+  WRITE_PIDFILE_FAILED = 26,
+  WRITE_CGROUP_FAILED = 27,
+  TRAFFIC_CONTROL_EXECUTION_FAILED = 28,
+  DOCKER_RUN_FAILED = 29,
+  ERROR_OPENING_DOCKER_FILE = 30,
+  ERROR_READING_DOCKER_FILE = 31,
+  FEATURE_DISABLED = 32,
+  COULD_NOT_CREATE_SCRIPT_COPY = 33,
+  COULD_NOT_CREATE_CREDENTIALS_FILE = 34,
+  COULD_NOT_CREATE_WORK_DIRECTORIES = 35,
+  COULD_NOT_CREATE_APP_LOG_DIRECTORIES = 36,
+  COULD_NOT_CREATE_TMP_DIRECTORIES = 37,
+  ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS = 38,
+  ERROR_SANITIZING_DOCKER_COMMAND = 39,
+  DOCKER_IMAGE_INVALID = 40,
+  DOCKER_CONTAINER_NAME_INVALID = 41,
+  ERROR_COMPILING_REGEX = 42
+};
+
+
+// the log file for messages
+extern FILE *LOGFILE;
+// the log file for error messages
+extern FILE *ERRORFILE;
+/**
+ * Function to split the given string using '%' as the separator. It's
+ * up to the caller to free the memory for the returned array. Use the
+ * free_values function to free the allocated memory.
+ *
+ * @param str the string to split
+ *
+ * @return an array of strings
+ */
+char** split(char *str);
+
+/**
+ * Function to split the given string using the delimiter specified. It's
+ * up to the caller to free the memory for the returned array. Use the
+ * free_values function to free the allocated memory.
+ *
+ * @param value the string to split
+ * @param delimiter the delimiter to use
+ *
+ * @return an array of strings
+ */
+char** split_delimiter(char *value, const char *delimiter);
+
+/**
+ * Function to free an array of strings.
+ *
+ * @param values the array to free
+ *
+ */
+void free_values(char **values);
+
+/**
+ * Trim whitespace from beginning and end. The returned string has to be freed
+ * by the caller.
+ *
+ * @param input    Input string that needs to be trimmed
+ *
+ * @return the trimmed string allocated with malloc
+ */
+char* trim(const char *input);
+
+#endif

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-1.cfg
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-1.cfg b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-1.cfg
new file mode 100644
index 0000000..4d0b90d
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-1.cfg
@@ -0,0 +1,31 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+[section-1]
+key1=value1
+split-key=val1,val2,val3
+perc-key=perc-val1%perc-val2
+# some comment
+
+[split-section]
+key3=value3
+
+[section-2]
+key1=value2
+
+key2=value2
+
+[split-section]
+key4=value4
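
This file exercises named sections, '%'-separated values, and a section name
([split-section]) that occurs twice; how duplicates merge is the parser's
concern, so the probe below sticks to the unambiguous lookups (a sketch,
assuming get_section_value hands back a caller-owned copy):

    struct configuration cfg = {.size = 0, .sections = NULL};
    if (read_config("configuration-1.cfg", &cfg) == 0) {
      const struct section *s1 = get_configuration_section("section-1", &cfg);
      if (s1 != NULL) {
        struct section sec = *s1;
        char *v = get_section_value("key1", &sec);       /* "value1" */
        char *raw = get_section_value("perc-key", &sec);
        char **percs = split(raw);                       /* perc-val1, perc-val2 */
        free_values(percs);
        free(raw);
        free(v);
      }
    }
    free_configuration(&cfg);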

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-2.cfg
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-2.cfg b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-2.cfg
new file mode 100644
index 0000000..aa02db8
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-2.cfg
@@ -0,0 +1,28 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Test mixed mode config file
+# Initial few lines are in the key=value format
+# and then the sections start
+
+key1=value1
+key2=value2
+
+
+[section-1]
+key3=value3
+key1=value4
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/old-config.cfg
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/old-config.cfg b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/old-config.cfg
new file mode 100644
index 0000000..947a3fa
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/old-config.cfg
@@ -0,0 +1,25 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+yarn.nodemanager.linux-container-executor.group=yarn
+banned.users=root,testuser1,testuser2#comma separated list of users who can not run applications
+min.user.id=1000
+allowed.system.users=nobody,daemon
+feature.docker.enabled=1
+feature.tc.enabled=0
+docker.binary=/usr/bin/docker
+yarn.local.dirs=/var/run/yarn%/tmp/mydir
+test.key=#no value for this key
+# test.key2=0
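
old-config.cfg covers backward compatibility: a flat key=value file with no
[section] headers, which the new parser is expected to surface under the
empty ("") section so that read_executor_config keeps working unchanged.
Reusing the read_config scaffolding from the previous sketch, but against the
"" section (expected values come from the file above):

    struct section top = *get_configuration_section("", &cfg);
    char *binary = get_section_value("docker.binary", &top);  /* "/usr/bin/docker" */
    char *raw = get_section_value("yarn.local.dirs", &top);
    char **dirs = split(raw);               /* "/var/run/yarn", "/tmp/mydir" */
    free_values(dirs);
    free(raw);
    free(binary);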

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index 3202652..3cfefa0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -18,6 +18,7 @@
 #include "configuration.h"
 #include "container-executor.h"
 #include "utils/string-utils.h"
+#include "util.h"
 
 #include <inttypes.h>
 #include <errno.h>
@@ -404,7 +405,7 @@ void test_delete_app() {
 }
 
 void validate_feature_enabled_value(int expected_value, const char* key,
-    int default_value, struct configuration *cfg) {
+    int default_value, struct section *cfg) {
   int value = is_feature_enabled(key, default_value, cfg);
 
   if (value != expected_value) {
@@ -419,7 +420,8 @@ void test_is_feature_enabled() {
   FILE *file = fopen(filename, "w");
   int disabled = 0;
   int enabled = 1;
-  struct configuration cfg = {.size=0, .confdetails=NULL};
+  struct configuration exec_cfg = {.size=0, .sections=NULL};
+  struct section cfg = {.size=0, .kv_pairs=NULL};
 
   if (file == NULL) {
     printf("FAIL: Could not open configuration file: %s\n", filename);
@@ -433,7 +435,8 @@ void test_is_feature_enabled() {
   fprintf(file, "feature.name5.enabled=-1\n");
   fprintf(file, "feature.name6.enabled=2\n");
   fclose(file);
-  read_config(filename, &cfg);
+  read_config(filename, &exec_cfg);
+  cfg = *(get_configuration_section("", &exec_cfg));
 
   validate_feature_enabled_value(disabled, "feature.name1.enabled",
       disabled, &cfg);
@@ -449,7 +452,7 @@ void test_is_feature_enabled() {
           disabled, &cfg);
 
 
-  free_configurations(&cfg);
+  free_configuration(&exec_cfg);
 }
 
 void test_delete_user() {
@@ -1345,8 +1348,8 @@ int main(int argc, char **argv) {
 
   read_executor_config(TEST_ROOT "/test.cfg");
 
-  local_dirs = extract_values(strdup(NM_LOCAL_DIRS));
-  log_dirs = extract_values(strdup(NM_LOG_DIRS));
+  local_dirs = split(strdup(NM_LOCAL_DIRS));
+  log_dirs = split(strdup(NM_LOG_DIRS));
 
   create_nm_roots(local_dirs);
 




[35/50] [abbrv] hadoop git commit: YARN-6958. Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice. Contributed by Yeliang Cang.

Posted by wa...@apache.org.
YARN-6958. Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice. Contributed by Yeliang Cang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/63cfcb90
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/63cfcb90
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/63cfcb90

Branch: refs/heads/YARN-5881
Commit: 63cfcb90ac6fbb79ba9ed6b3044cd999fc74e58c
Parents: 69afa26
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Aug 9 23:58:22 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Aug 9 23:58:22 2017 +0900

----------------------------------------------------------------------
 .../server/timeline/LevelDBCacheTimelineStore.java    | 14 +++++++-------
 .../reader/filter/TimelineFilterUtils.java            |  7 ++++---
 .../storage/HBaseTimelineReaderImpl.java              |  8 ++++----
 .../storage/HBaseTimelineWriterImpl.java              |  8 ++++----
 .../storage/TimelineSchemaCreator.java                |  7 ++++---
 .../storage/application/ApplicationTable.java         |  7 ++++---
 .../storage/apptoflow/AppToFlowTable.java             |  7 ++++---
 .../timelineservice/storage/common/ColumnHelper.java  |  8 +++++---
 .../storage/common/HBaseTimelineStorageUtils.java     |  8 ++++----
 .../timelineservice/storage/entity/EntityTable.java   |  7 ++++---
 .../storage/flow/FlowActivityTable.java               |  7 ++++---
 .../storage/flow/FlowRunCoprocessor.java              |  7 ++++---
 .../timelineservice/storage/flow/FlowRunTable.java    |  7 ++++---
 .../timelineservice/storage/flow/FlowScanner.java     |  7 ++++---
 .../storage/reader/TimelineEntityReader.java          |  7 ++++---
 .../collector/AppLevelTimelineCollector.java          |  7 ++++---
 .../collector/NodeTimelineCollectorManager.java       |  8 ++++----
 .../PerNodeTimelineCollectorsAuxService.java          | 10 +++++-----
 .../timelineservice/collector/TimelineCollector.java  |  7 ++++---
 .../collector/TimelineCollectorManager.java           |  8 ++++----
 .../collector/TimelineCollectorWebService.java        |  8 ++++----
 .../timelineservice/reader/TimelineReaderServer.java  |  9 +++++----
 .../reader/TimelineReaderWebServices.java             |  8 ++++----
 .../storage/FileSystemTimelineReaderImpl.java         |  8 ++++----
 .../storage/common/TimelineStorageUtils.java          |  4 ----
 25 files changed, 102 insertions(+), 91 deletions(-)
----------------------------------------------------------------------
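
The change is mechanical across the files listed above: swap the
commons-logging Log/LogFactory pair for slf4j's Logger/LoggerFactory and
leave the call sites alone. In outline (the class name is just illustrative):

    // Before: commons-logging
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    private static final Log LOG = LogFactory.getLog(FlowScanner.class);

    // After: slf4j
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    private static final Logger LOG = LoggerFactory.getLogger(FlowScanner.class);

The one knock-on effect, visible in LevelDBCacheTimelineStore below, is that
IOUtils.cleanup(Log, ...) expects a commons-logging Log, so those call sites
move to IOUtils.cleanupWithLogger, which takes an slf4j Logger.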


http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
index 7379dd6..f7a3d01 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
@@ -19,8 +19,6 @@
 package org.apache.hadoop.yarn.server.timeline;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -34,6 +32,8 @@ import org.fusesource.leveldbjni.JniDBFactory;
 import org.iq80.leveldb.DB;
 import org.iq80.leveldb.DBIterator;
 import org.iq80.leveldb.Options;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.File;
 import java.io.IOException;
@@ -58,8 +58,8 @@ import java.util.Map;
 @Private
 @Unstable
 public class LevelDBCacheTimelineStore extends KeyValueBasedTimelineStore {
-  private static final Log LOG
-      = LogFactory.getLog(LevelDBCacheTimelineStore.class);
+  private static final Logger LOG
+      = LoggerFactory.getLogger(LevelDBCacheTimelineStore.class);
   private static final String CACHED_LDB_FILE_PREFIX = "-timeline-cache.ldb";
   private String dbId;
   private DB entityDb;
@@ -102,7 +102,7 @@ public class LevelDBCacheTimelineStore extends KeyValueBasedTimelineStore {
         localFS.setPermission(dbPath, LeveldbUtils.LEVELDB_DIR_UMASK);
       }
     } finally {
-      IOUtils.cleanup(LOG, localFS);
+      IOUtils.cleanupWithLogger(LOG, localFS);
     }
     LOG.info("Using leveldb path " + dbPath);
     entityDb = factory.open(new File(dbPath.toString()), options);
@@ -113,7 +113,7 @@ public class LevelDBCacheTimelineStore extends KeyValueBasedTimelineStore {
 
   @Override
   protected synchronized void serviceStop() throws Exception {
-    IOUtils.cleanup(LOG, entityDb);
+    IOUtils.cleanupWithLogger(LOG, entityDb);
     Path dbPath = new Path(
         configuration.get(YarnConfiguration.TIMELINE_SERVICE_LEVELDB_PATH),
         dbId + CACHED_LDB_FILE_PREFIX);
@@ -125,7 +125,7 @@ public class LevelDBCacheTimelineStore extends KeyValueBasedTimelineStore {
               "timeline store " + dbPath);
       }
     } finally {
-      IOUtils.cleanup(LOG, localFS);
+      IOUtils.cleanupWithLogger(LOG, localFS);
     }
     super.serviceStop();
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java
index cccae26..a934a3d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java
@@ -22,8 +22,6 @@ import java.io.IOException;
 import java.util.HashSet;
 import java.util.Set;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.filter.BinaryComparator;
 import org.apache.hadoop.hbase.filter.BinaryPrefixComparator;
 import org.apache.hadoop.hbase.filter.FamilyFilter;
@@ -36,13 +34,16 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnFamily
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnPrefix;
 import org.apache.hadoop.hbase.filter.QualifierFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Set of utility methods used by timeline filter classes.
  */
 public final class TimelineFilterUtils {
 
-  private static final Log LOG = LogFactory.getLog(TimelineFilterUtils.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineFilterUtils.class);
 
   private TimelineFilterUtils() {
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java
index a384a84..dc50f42 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java
@@ -21,8 +21,6 @@ package org.apache.hadoop.yarn.server.timelineservice.storage;
 import java.io.IOException;
 import java.util.Set;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.client.Connection;
@@ -34,6 +32,8 @@ import org.apache.hadoop.yarn.server.timelineservice.reader.TimelineEntityFilter
 import org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderContext;
 import org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader;
 import org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReaderFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * HBase based implementation for {@link TimelineReader}.
@@ -41,8 +41,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEnti
 public class HBaseTimelineReaderImpl
     extends AbstractService implements TimelineReader {
 
-  private static final Log LOG = LogFactory
-      .getLog(HBaseTimelineReaderImpl.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(HBaseTimelineReaderImpl.class);
 
   private Configuration hbaseConf = null;
   private Connection conn;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
index b94b85f..afa58cb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
@@ -21,8 +21,6 @@ import java.io.IOException;
 import java.util.Map;
 import java.util.Set;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -65,6 +63,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunColumn;
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunColumnPrefix;
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunRowKey;
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This implements a hbase based backend for storing the timeline entity
@@ -76,8 +76,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunTable;
 public class HBaseTimelineWriterImpl extends AbstractService implements
     TimelineWriter {
 
-  private static final Log LOG = LogFactory
-      .getLog(HBaseTimelineWriterImpl.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(HBaseTimelineWriterImpl.class);
 
   private Connection conn;
   private TypedBufferedMutator<EntityTable> entityTable;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
index b3b749e..dbed05d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
@@ -29,8 +29,6 @@ import org.apache.commons.cli.Options;
 import org.apache.commons.cli.ParseException;
 import org.apache.commons.cli.PosixParser;
 import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -46,6 +44,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowActivityTa
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunTable;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This creates the schema for a hbase based backend for storing application
@@ -58,7 +58,8 @@ public final class TimelineSchemaCreator {
   }
 
   final static String NAME = TimelineSchemaCreator.class.getSimpleName();
-  private static final Log LOG = LogFactory.getLog(TimelineSchemaCreator.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineSchemaCreator.class);
   private static final String SKIP_EXISTING_TABLE_OPTION_SHORT = "s";
   private static final String APP_METRICS_TTL_OPTION_SHORT = "ma";
   private static final String APP_TABLE_NAME_SHORT = "a";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java
index cb4fc92..d3bdd39 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java
@@ -19,8 +19,6 @@ package org.apache.hadoop.yarn.server.timelineservice.storage.application;
 
 import java.io.IOException;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
@@ -30,6 +28,8 @@ import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.BaseTable;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineHBaseSchemaConstants;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The application table as column families info, config and metrics. Info
@@ -99,7 +99,8 @@ public class ApplicationTable extends BaseTable<ApplicationTable> {
   /** default max number of versions. */
   private static final int DEFAULT_METRICS_MAX_VERSIONS = 10000;
 
-  private static final Log LOG = LogFactory.getLog(ApplicationTable.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ApplicationTable.class);
 
   public ApplicationTable() {
     super(TABLE_NAME_CONF_NAME, DEFAULT_TABLE_NAME);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/apptoflow/AppToFlowTable.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/apptoflow/AppToFlowTable.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/apptoflow/AppToFlowTable.java
index 301cf99..40d95a4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/apptoflow/AppToFlowTable.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/apptoflow/AppToFlowTable.java
@@ -18,8 +18,6 @@
 package org.apache.hadoop.yarn.server.timelineservice.storage.apptoflow;
 
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
@@ -29,6 +27,8 @@ import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.BaseTable;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineHBaseSchemaConstants;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 
@@ -68,7 +68,8 @@ public class AppToFlowTable extends BaseTable<AppToFlowTable> {
   /** default value for app_flow table name. */
   private static final String DEFAULT_TABLE_NAME = "timelineservice.app_flow";
 
-  private static final Log LOG = LogFactory.getLog(AppToFlowTable.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(AppToFlowTable.class);
 
   public AppToFlowTable() {
     super(TABLE_NAME_CONF_NAME, DEFAULT_TABLE_NAME);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/ColumnHelper.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/ColumnHelper.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/ColumnHelper.java
index be55db5..a9c2148 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/ColumnHelper.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/ColumnHelper.java
@@ -24,13 +24,14 @@ import java.util.Map.Entry;
 import java.util.NavigableMap;
 import java.util.TreeMap;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.AggregationCompactionDimension;
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.Attribute;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
  * This class is meant to be used only by explicit Columns, and not directly to
  * write by clients.
@@ -38,7 +39,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.flow.Attribute;
  * @param <T> refers to the table.
  */
 public class ColumnHelper<T> {
-  private static final Log LOG = LogFactory.getLog(ColumnHelper.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ColumnHelper.class);
 
   private final ColumnFamily<T> columnFamily;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java
index e93b470..b6f1157 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java
@@ -17,8 +17,6 @@
 
 package org.apache.hadoop.yarn.server.timelineservice.storage.common;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
@@ -30,6 +28,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.flow.AggregationCom
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.AggregationOperation;
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.Attribute;
 import org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.util.List;
@@ -41,8 +41,8 @@ import java.util.Map;
 public final class HBaseTimelineStorageUtils {
   /** milliseconds in one day. */
   public static final long MILLIS_ONE_DAY = 86400000L;
-  private static final Log LOG =
-      LogFactory.getLog(HBaseTimelineStorageUtils.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(HBaseTimelineStorageUtils.class);
 
   private HBaseTimelineStorageUtils() {
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java
index ddf0406..df5ce69 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java
@@ -19,8 +19,6 @@ package org.apache.hadoop.yarn.server.timelineservice.storage.entity;
 
 import java.io.IOException;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
@@ -30,6 +28,8 @@ import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.BaseTable;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineHBaseSchemaConstants;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The entity table as column families info, config and metrics. Info stores
@@ -99,7 +99,8 @@ public class EntityTable extends BaseTable<EntityTable> {
   /** default max number of versions. */
   private static final int DEFAULT_METRICS_MAX_VERSIONS = 10000;
 
-  private static final Log LOG = LogFactory.getLog(EntityTable.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(EntityTable.class);
 
   public EntityTable() {
     super(TABLE_NAME_CONF_NAME, DEFAULT_TABLE_NAME);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityTable.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityTable.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityTable.java
index 8a0430c..e646eb2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityTable.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityTable.java
@@ -19,8 +19,6 @@ package org.apache.hadoop.yarn.server.timelineservice.storage.flow;
 
 import java.io.IOException;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
@@ -29,6 +27,8 @@ import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.BaseTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The flow activity table has column family info
@@ -63,7 +63,8 @@ public class FlowActivityTable extends BaseTable<FlowActivityTable> {
   public static final String DEFAULT_TABLE_NAME =
       "timelineservice.flowactivity";
 
-  private static final Log LOG = LogFactory.getLog(FlowActivityTable.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FlowActivityTable.class);
 
   /** default max number of versions. */
   public static final int DEFAULT_METRICS_MAX_VERSIONS = Integer.MAX_VALUE;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java
index 2be6ef8..221420e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java
@@ -24,8 +24,6 @@ import java.util.Map;
 import java.util.NavigableMap;
 import java.util.TreeMap;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
@@ -50,13 +48,16 @@ import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.HBaseTimelineStorageUtils;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.TimestampGenerator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Coprocessor for flow run table.
  */
 public class FlowRunCoprocessor extends BaseRegionObserver {
 
-  private static final Log LOG = LogFactory.getLog(FlowRunCoprocessor.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FlowRunCoprocessor.class);
   private boolean isFlowRunRegion = false;
 
   private Region region;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunTable.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunTable.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunTable.java
index 547bef0..9c6549f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunTable.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunTable.java
@@ -19,8 +19,6 @@ package org.apache.hadoop.yarn.server.timelineservice.storage.flow;
 
 import java.io.IOException;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
@@ -29,6 +27,8 @@ import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.BaseTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The flow run table has column family info
@@ -94,7 +94,8 @@ public class FlowRunTable extends BaseTable<FlowRunTable> {
   /** default value for flowrun table name. */
   public static final String DEFAULT_TABLE_NAME = "timelineservice.flowrun";
 
-  private static final Log LOG = LogFactory.getLog(FlowRunTable.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FlowRunTable.class);
 
   /** default max number of versions. */
   public static final int DEFAULT_METRICS_MAX_VERSIONS = Integer.MAX_VALUE;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java
index 0e3c8ee..dbd0484 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java
@@ -27,8 +27,6 @@ import java.util.Set;
 import java.util.SortedSet;
 import java.util.TreeSet;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
@@ -52,6 +50,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.common.TimestampGen
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.ValueConverter;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Invoked via the coprocessor when a Get or a Scan is issued for flow run
@@ -62,7 +62,8 @@ import com.google.common.annotations.VisibleForTesting;
  */
 class FlowScanner implements RegionScanner, Closeable {
 
-  private static final Log LOG = LogFactory.getLog(FlowScanner.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FlowScanner.class);
 
   /**
    * use a special application id to represent the flow id this is needed since

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/reader/TimelineEntityReader.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/reader/TimelineEntityReader.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/reader/TimelineEntityReader.java
index 7b294a8..424d141 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/reader/TimelineEntityReader.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/reader/TimelineEntityReader.java
@@ -27,8 +27,6 @@ import java.util.NavigableSet;
 import java.util.Set;
 import java.util.TreeSet;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.Result;
@@ -54,6 +52,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.common.KeyConverter
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.StringKeyConverter;
 import org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityColumnPrefix;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The base class for reading and deserializing timeline entities from the
@@ -61,7 +61,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityColumn
  * entities that are being requested.
  */
 public abstract class TimelineEntityReader {
-  private static final Log LOG = LogFactory.getLog(TimelineEntityReader.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineEntityReader.class);
 
   private final boolean singleEntityRead;
   private TimelineReaderContext context;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/AppLevelTimelineCollector.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/AppLevelTimelineCollector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/AppLevelTimelineCollector.java
index 0b05309..56f7b2b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/AppLevelTimelineCollector.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/AppLevelTimelineCollector.java
@@ -19,8 +19,6 @@
 package org.apache.hadoop.yarn.server.timelineservice.collector;
 
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -32,6 +30,8 @@ import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 
 import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.util.HashSet;
 import java.util.Map;
@@ -48,7 +48,8 @@ import java.util.concurrent.TimeUnit;
 @Private
 @Unstable
 public class AppLevelTimelineCollector extends TimelineCollector {
-  private static final Log LOG = LogFactory.getLog(TimelineCollector.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineCollector.class);
 
   private final static int AGGREGATION_EXECUTOR_NUM_THREADS = 1;
   private final static int AGGREGATION_EXECUTOR_EXEC_INTERVAL_SECS = 15;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
index 0323d7b..1719782 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
@@ -26,8 +26,6 @@ import java.net.InetSocketAddress;
 import java.net.URI;
 import java.util.HashMap;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -47,6 +45,8 @@ import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Class on the NodeManager side that manages adding and removing collectors and
@@ -55,8 +55,8 @@ import com.google.common.annotations.VisibleForTesting;
 @Private
 @Unstable
 public class NodeTimelineCollectorManager extends TimelineCollectorManager {
-  private static final Log LOG =
-      LogFactory.getLog(NodeTimelineCollectorManager.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(NodeTimelineCollectorManager.class);
 
   // REST server for this collector manager.
   private HttpServer2 timelineRestServer;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
index 266bd04..e4e6421 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
@@ -23,8 +23,6 @@ import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -43,6 +41,8 @@ import org.apache.hadoop.yarn.server.api.ContainerTerminationContext;
 import org.apache.hadoop.yarn.server.api.ContainerType;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The top-level server for the per-node timeline collector manager. Currently
@@ -52,8 +52,8 @@ import com.google.common.annotations.VisibleForTesting;
 @Private
 @Unstable
 public class PerNodeTimelineCollectorsAuxService extends AuxiliaryService {
-  private static final Log LOG =
-      LogFactory.getLog(PerNodeTimelineCollectorsAuxService.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(PerNodeTimelineCollectorsAuxService.class);
   private static final int SHUTDOWN_HOOK_PRIORITY = 30;
 
   private final NodeTimelineCollectorManager collectorManager;
@@ -209,7 +209,7 @@ public class PerNodeTimelineCollectorsAuxService extends AuxiliaryService {
       auxService.init(conf);
       auxService.start();
     } catch (Throwable t) {
-      LOG.fatal("Error starting PerNodeTimelineCollectorServer", t);
+      LOG.error("Error starting PerNodeTimelineCollectorServer", t);
       ExitUtil.terminate(-1, "Error starting PerNodeTimelineCollectorServer");
     }
     return auxService;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
index 5416b26..37387f1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
@@ -26,8 +26,6 @@ import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -39,6 +37,8 @@ import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineWriteResponse;
 import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Service that handles writes to the timeline service and writes them to the
@@ -51,7 +51,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter;
 @Unstable
 public abstract class TimelineCollector extends CompositeService {
 
-  private static final Log LOG = LogFactory.getLog(TimelineCollector.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineCollector.class);
   public static final String SEPARATOR = "_";
 
   private TimelineWriter writer;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java
index 07cbb2b..94b95ad 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java
@@ -26,8 +26,6 @@ import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -39,6 +37,8 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Class that manages adding and removing collectors and their lifecycle. It
@@ -48,8 +48,8 @@ import com.google.common.annotations.VisibleForTesting;
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
 public class TimelineCollectorManager extends AbstractService {
-  private static final Log LOG =
-      LogFactory.getLog(TimelineCollectorManager.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineCollectorManager.class);
 
   private TimelineWriter writer;
   private ScheduledExecutorService writerFlusher;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java
index fe04b7a..efb5d6b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java
@@ -36,8 +36,6 @@ import javax.xml.bind.annotation.XmlAccessorType;
 import javax.xml.bind.annotation.XmlElement;
 import javax.xml.bind.annotation.XmlRootElement;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
@@ -58,6 +56,8 @@ import org.apache.hadoop.yarn.webapp.ForbiddenException;
 import org.apache.hadoop.yarn.webapp.NotFoundException;
 
 import com.google.inject.Singleton;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The main per-node REST end point for timeline service writes. It is
@@ -69,8 +69,8 @@ import com.google.inject.Singleton;
 @Singleton
 @Path("/ws/v2/timeline")
 public class TimelineCollectorWebService {
-  private static final Log LOG =
-      LogFactory.getLog(TimelineCollectorWebService.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineCollectorWebService.class);
 
   private @Context ServletContext context;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
index 2faf4b6..d7eff32 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
@@ -25,8 +25,6 @@ import java.net.URI;
 import java.util.HashMap;
 import java.util.Map;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -48,12 +46,15 @@ import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /** Main class for Timeline Reader. */
 @Private
 @Unstable
 public class TimelineReaderServer extends CompositeService {
-  private static final Log LOG = LogFactory.getLog(TimelineReaderServer.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineReaderServer.class);
   private static final int SHUTDOWN_HOOK_PRIORITY = 30;
   static final String TIMELINE_READER_MANAGER_ATTR =
       "timeline.reader.manager";
@@ -203,7 +204,7 @@ public class TimelineReaderServer extends CompositeService {
       timelineReaderServer.init(conf);
       timelineReaderServer.start();
     } catch (Throwable t) {
-      LOG.fatal("Error starting TimelineReaderWebServer", t);
+      LOG.error("Error starting TimelineReaderWebServer", t);
       ExitUtil.terminate(-1, "Error starting TimelineReaderWebServer");
     }
     return timelineReaderServer;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java
index 139a1be..b3e3cdc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java
@@ -40,8 +40,6 @@ import javax.ws.rs.core.Context;
 import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.http.JettyUtils;
@@ -57,6 +55,8 @@ import org.apache.hadoop.yarn.webapp.NotFoundException;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.inject.Singleton;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /** REST end point for Timeline Reader. */
 @Private
@@ -64,8 +64,8 @@ import com.google.inject.Singleton;
 @Singleton
 @Path("/ws/v2/timeline")
 public class TimelineReaderWebServices {
-  private static final Log LOG =
-      LogFactory.getLog(TimelineReaderWebServices.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TimelineReaderWebServices.class);
 
   @Context private ServletContext ctxt;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java
index 967702b..b4e792b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java
@@ -39,8 +39,6 @@ import com.fasterxml.jackson.databind.ObjectMapper;
 import org.apache.commons.csv.CSVFormat;
 import org.apache.commons.csv.CSVParser;
 import org.apache.commons.csv.CSVRecord;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.service.AbstractService;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
@@ -54,6 +52,8 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStor
 import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  *  File System based implementation for TimelineReader. This implementation may
@@ -64,8 +64,8 @@ import com.google.common.annotations.VisibleForTesting;
 public class FileSystemTimelineReaderImpl extends AbstractService
     implements TimelineReader {
 
-  private static final Log LOG =
-      LogFactory.getLog(FileSystemTimelineReaderImpl.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FileSystemTimelineReaderImpl.class);
 
   private String rootPath;
   private static final String ENTITIES_DIR = "entities";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java
index 9b83659..7f7d640 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java
@@ -23,8 +23,6 @@ import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
@@ -48,8 +46,6 @@ public final class TimelineStorageUtils {
   private TimelineStorageUtils() {
   }
 
-  private static final Log LOG = LogFactory.getLog(TimelineStorageUtils.class);
-
   /**
    * Matches key-values filter. Used for relatesTo/isRelatedTo filters.
    *
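
The change is identical in every file of this commit: the commons-logging Log/LogFactory pair is swapped for its SLF4J equivalent. A condensed before/after sketch of the pattern (an illustration of the diffs above, not additional committed code):

    // Before: commons-logging
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    private static final Log LOG = LogFactory.getLog(FlowScanner.class);

    // After: SLF4J
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    private static final Logger LOG =
        LoggerFactory.getLogger(FlowScanner.class);

SLF4J exposes no FATAL level, which is why the two LOG.fatal(...) calls in PerNodeTimelineCollectorsAuxService and TimelineReaderServer become LOG.error(...) in the diffs above.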


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[21/50] [abbrv] hadoop git commit: MAPREDUCE-6927. MR job should only set tracking url if history was successfully written. Contributed by Eric Badger

Posted by wa...@apache.org.
MAPREDUCE-6927. MR job should only set tracking url if history was successfully written. Contributed by Eric Badger


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/735fce5b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/735fce5b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/735fce5b

Branch: refs/heads/YARN-5881
Commit: 735fce5bec17f4e1799daf922625c475cf588114
Parents: acf9bd8
Author: Jason Lowe <jl...@yahoo-inc.com>
Authored: Tue Aug 8 14:46:47 2017 -0500
Committer: Jason Lowe <jl...@yahoo-inc.com>
Committed: Tue Aug 8 14:46:47 2017 -0500

----------------------------------------------------------------------
 .../jobhistory/JobHistoryEventHandler.java      |  27 +++--
 .../hadoop/mapreduce/v2/app/AppContext.java     |   4 +
 .../hadoop/mapreduce/v2/app/MRAppMaster.java    |  11 ++
 .../mapreduce/v2/app/rm/RMCommunicator.java     |   4 +-
 .../jobhistory/TestJobHistoryEventHandler.java  | 102 +++++++++++++++++++
 .../hadoop/mapreduce/v2/app/MockAppContext.java |  10 ++
 .../mapreduce/v2/app/TestRuntimeEstimators.java |  10 ++
 .../hadoop/mapreduce/v2/hs/JobHistory.java      |  10 ++
 8 files changed, 168 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
index 285d36e..53fe055 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
@@ -63,6 +63,7 @@ import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils;
 import org.apache.hadoop.mapreduce.v2.jobhistory.JHAdminConfig;
 import org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils;
 import org.apache.hadoop.mapreduce.v2.jobhistory.JobIndexInfo;
+import org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.AbstractService;
 import org.apache.hadoop.util.StringUtils;
@@ -1404,7 +1405,12 @@ public class JobHistoryEventHandler extends AbstractService
         qualifiedDoneFile =
             doneDirFS.makeQualified(new Path(doneDirPrefixPath,
                 doneJobHistoryFileName));
-        moveToDoneNow(qualifiedLogFile, qualifiedDoneFile);
+        if(moveToDoneNow(qualifiedLogFile, qualifiedDoneFile)) {
+          String historyUrl = MRWebAppUtil.getApplicationWebURLOnJHSWithScheme(
+              getConfig(), context.getApplicationID());
+          context.setHistoryUrl(historyUrl);
+          LOG.info("Set historyUrl to " + historyUrl);
+        }
       }
 
       // Move confFile to Done Folder
@@ -1610,7 +1616,7 @@ public class JobHistoryEventHandler extends AbstractService
     }
   }
 
-  private void moveTmpToDone(Path tmpPath) throws IOException {
+  protected void moveTmpToDone(Path tmpPath) throws IOException {
     if (tmpPath != null) {
       String tmpFileName = tmpPath.getName();
       String fileName = getFileNameFromTmpFN(tmpFileName);
@@ -1622,7 +1628,9 @@ public class JobHistoryEventHandler extends AbstractService
   
   // TODO If the FS objects are the same, this should be a rename instead of a
   // copy.
-  private void moveToDoneNow(Path fromPath, Path toPath) throws IOException {
+  protected boolean moveToDoneNow(Path fromPath, Path toPath)
+      throws IOException {
+    boolean success = false;
     // check if path exists, in case of retries it may not exist
     if (stagingDirFS.exists(fromPath)) {
       LOG.info("Copying " + fromPath.toString() + " to " + toPath.toString());
@@ -1631,13 +1639,18 @@ public class JobHistoryEventHandler extends AbstractService
       boolean copied = FileUtil.copy(stagingDirFS, fromPath, doneDirFS, toPath,
           false, getConfig());
 
-      if (copied)
-        LOG.info("Copied to done location: " + toPath);
-      else 
-        LOG.info("copy failed");
       doneDirFS.setPermission(toPath, new FsPermission(
           JobHistoryUtils.HISTORY_INTERMEDIATE_FILE_PERMISSIONS));
+      if (copied) {
+        LOG.info("Copied from: " + fromPath.toString()
+            + " to done location: " + toPath.toString());
+        success = true;
+      } else {
+        LOG.info("Copy failed from: " + fromPath.toString()
+            + " to done location: " + toPath.toString());
+      }
     }
+    return success;
   }
 
   private String getTempFileName(String srcFile) {
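
The core of the fix is visible in the hunks above: moveToDoneNow now reports whether the copy to the done directory actually happened, and processDoneFiles publishes the JobHistoryServer URL only on success. A condensed sketch of the resulting control flow, stitched together from the hunks rather than quoted verbatim:

    boolean copied = moveToDoneNow(qualifiedLogFile, qualifiedDoneFile);
    if (copied) {
      // The history file really reached the done dir, so the JHS link is
      // safe to advertise as the tracking URL.
      String historyUrl = MRWebAppUtil.getApplicationWebURLOnJHSWithScheme(
          getConfig(), context.getApplicationID());
      context.setHistoryUrl(historyUrl);
    }

RMCommunicator (below) completes the picture: it now reads context.getHistoryUrl() instead of recomputing the URL, so when the copy fails the tracking URL simply stays unset rather than pointing at history that was never written.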

http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
index ddf4fa7..4a21396 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
@@ -69,4 +69,8 @@ public interface AppContext {
   String getNMHostname();
 
   TaskAttemptFinishingMonitor getTaskAttemptFinishingMonitor();
+
+  String getHistoryUrl();
+
+  void setHistoryUrl(String historyUrl);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
index 8c9f605..f511f19 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
@@ -1078,6 +1078,7 @@ public class MRAppMaster extends CompositeService {
     private final ClientToAMTokenSecretManager clientToAMTokenSecretManager;
     private TimelineClient timelineClient = null;
     private TimelineV2Client timelineV2Client = null;
+    private String historyUrl = null;
 
     private final TaskAttemptFinishingMonitor taskAttemptFinishingMonitor;
 
@@ -1197,6 +1198,16 @@ public class MRAppMaster extends CompositeService {
     public TimelineV2Client getTimelineV2Client() {
       return timelineV2Client;
     }
+
+    @Override
+    public String getHistoryUrl() {
+      return historyUrl;
+    }
+
+    @Override
+    public void setHistoryUrl(String historyUrl) {
+      this.historyUrl = historyUrl;
+    }
   }
 
   @SuppressWarnings("unchecked")

http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
index 6cec2f3..a7058e0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
@@ -215,9 +215,7 @@ public abstract class RMCommunicator extends AbstractService
     }
     LOG.info("Setting job diagnostics to " + sb.toString());
 
-    String historyUrl =
-        MRWebAppUtil.getApplicationWebURLOnJHSWithScheme(getConfig(),
-            context.getApplicationID());
+    String historyUrl = context.getHistoryUrl();
     LOG.info("History url is " + historyUrl);
     FinishApplicationMasterRequest request =
         FinishApplicationMasterRequest.newInstance(finishState,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
index 6c5e604..caf8c67 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
@@ -21,6 +21,9 @@ package org.apache.hadoop.mapreduce.jobhistory;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.mockito.Matchers.any;
+import static org.mockito.Mockito.doNothing;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.times;
@@ -62,6 +65,7 @@ import org.apache.hadoop.mapreduce.v2.app.job.JobStateInternal;
 import org.apache.hadoop.mapreduce.v2.jobhistory.JHAdminConfig;
 import org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils;
 import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils;
+import org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil;
 import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
@@ -920,6 +924,104 @@ public class TestJobHistoryEventHandler {
         jheh.lastEventHandled.getHistoryEvent()
         instanceof JobUnsuccessfulCompletionEvent);
   }
+
+  @Test (timeout=50000)
+  public void testSetTrackingURLAfterHistoryIsWritten() throws Exception {
+    TestParams t = new TestParams(true);
+    Configuration conf = new Configuration();
+
+    JHEvenHandlerForTest realJheh =
+        new JHEvenHandlerForTest(t.mockAppContext, 0, false);
+    JHEvenHandlerForTest jheh = spy(realJheh);
+    jheh.init(conf);
+
+    try {
+      jheh.start();
+      handleEvent(jheh, new JobHistoryEvent(t.jobId, new AMStartedEvent(
+          t.appAttemptId, 200, t.containerId, "nmhost", 3000, 4000, -1)));
+      verify(jheh, times(0)).processDoneFiles(any(JobId.class));
+      verify(t.mockAppContext, times(0)).setHistoryUrl(any(String.class));
+
+      // Job finishes and successfully writes history
+      handleEvent(jheh, new JobHistoryEvent(t.jobId, new JobFinishedEvent(
+          TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0, 0, new Counters(),
+          new Counters(), new Counters())));
+
+      verify(jheh, times(1)).processDoneFiles(any(JobId.class));
+      String historyUrl = MRWebAppUtil.getApplicationWebURLOnJHSWithScheme(
+          conf, t.mockAppContext.getApplicationID());
+      verify(t.mockAppContext, times(1)).setHistoryUrl(historyUrl);
+    } finally {
+      jheh.stop();
+    }
+  }
+
+  @Test (timeout=50000)
+  public void testDontSetTrackingURLIfHistoryWriteFailed() throws Exception {
+    TestParams t = new TestParams(true);
+    Configuration conf = new Configuration();
+
+    JHEvenHandlerForTest realJheh =
+        new JHEvenHandlerForTest(t.mockAppContext, 0, false);
+    JHEvenHandlerForTest jheh = spy(realJheh);
+    jheh.init(conf);
+
+    try {
+      jheh.start();
+      doReturn(false).when(jheh).moveToDoneNow(any(Path.class),
+          any(Path.class));
+      doNothing().when(jheh).moveTmpToDone(any(Path.class));
+      handleEvent(jheh, new JobHistoryEvent(t.jobId, new AMStartedEvent(
+          t.appAttemptId, 200, t.containerId, "nmhost", 3000, 4000, -1)));
+      verify(jheh, times(0)).processDoneFiles(any(JobId.class));
+      verify(t.mockAppContext, times(0)).setHistoryUrl(any(String.class));
+
+      // Job finishes, but doesn't successfully write history
+      handleEvent(jheh, new JobHistoryEvent(t.jobId, new JobFinishedEvent(
+          TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0, 0, new Counters(),
+          new Counters(), new Counters())));
+      verify(jheh, times(1)).processDoneFiles(any(JobId.class));
+      verify(t.mockAppContext, times(0)).setHistoryUrl(any(String.class));
+
+    } finally {
+      jheh.stop();
+    }
+  }
+  @Test (timeout=50000)
+  public void testDontSetTrackingURLIfHistoryWriteThrows() throws Exception {
+    TestParams t = new TestParams(true);
+    Configuration conf = new Configuration();
+
+    JHEvenHandlerForTest realJheh =
+        new JHEvenHandlerForTest(t.mockAppContext, 0, false);
+    JHEvenHandlerForTest jheh = spy(realJheh);
+    jheh.init(conf);
+
+    try {
+      jheh.start();
+      doThrow(new YarnRuntimeException(new IOException()))
+          .when(jheh).processDoneFiles(any(JobId.class));
+      handleEvent(jheh, new JobHistoryEvent(t.jobId, new AMStartedEvent(
+          t.appAttemptId, 200, t.containerId, "nmhost", 3000, 4000, -1)));
+      verify(jheh, times(0)).processDoneFiles(any(JobId.class));
+      verify(t.mockAppContext, times(0)).setHistoryUrl(any(String.class));
+
+      // Job finishes, but doesn't successfully write history
+      try {
+        handleEvent(jheh, new JobHistoryEvent(t.jobId, new JobFinishedEvent(
+            TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0, 0, new Counters(),
+            new Counters(), new Counters())));
+        throw new RuntimeException(
+            "processDoneFiles didn't throw, but should have");
+      } catch (YarnRuntimeException yre) {
+        // Exception expected, do nothing
+      }
+      verify(jheh, times(1)).processDoneFiles(any(JobId.class));
+      verify(t.mockAppContext, times(0)).setHistoryUrl(any(String.class));
+    } finally {
+      jheh.stop();
+    }
+  }
 }
 
 class JHEvenHandlerForTest extends JobHistoryEventHandler {
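
A side note on the Mockito usage in these tests: jheh is a spy, so the stubs use the doReturn(...).when(jheh).moveToDoneNow(...) form. The more familiar when(jheh.moveToDoneNow(...)).thenReturn(...) would invoke the real moveToDoneNow once while the stub is being set up, which is exactly what the tests need to avoid. A minimal reminder of the difference:

    // Safe on a spy: the real method is never invoked during stubbing.
    doReturn(false).when(jheh).moveToDoneNow(any(Path.class), any(Path.class));

    // Calls the real moveToDoneNow once during setup -- avoid on spies:
    // when(jheh.moveToDoneNow(any(Path.class), any(Path.class)))
    //     .thenReturn(false);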

http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
index 4e31b63..0686633 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
@@ -154,4 +154,14 @@ public class MockAppContext implements AppContext {
       return null;
   }
 
+  @Override
+  public String getHistoryUrl() {
+    return null;
+  }
+
+  @Override
+  public void setHistoryUrl(String historyUrl) {
+    return;
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
index 8c7f0db..301d498 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
@@ -896,5 +896,15 @@ public class TestRuntimeEstimators {
     public TaskAttemptFinishingMonitor getTaskAttemptFinishingMonitor() {
       return null;
     }
+
+    @Override
+    public String getHistoryUrl() {
+      return null;
+    }
+
+    @Override
+    public void setHistoryUrl(String historyUrl) {
+      return;
+    }
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/735fce5b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
index c5a40b2..2671df4 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
@@ -407,4 +407,14 @@ public class JobHistory extends AbstractService implements HistoryContext {
   public TaskAttemptFinishingMonitor getTaskAttemptFinishingMonitor() {
     return null;
   }
+
+  @Override
+  public String getHistoryUrl() {
+    return null;
+  }
+
+  @Override
+  public void setHistoryUrl(String historyUrl) {
+    return;
+  }
 }


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[50/50] [abbrv] hadoop git commit: YARN-6471. Support to add min/max resource configuration for a queue. (Sunil G via wangda)

Posted by wa...@apache.org.
YARN-6471. Support to add min/max resource configuration for a queue. (Sunil G via wangda)

Change-Id: I9213f5297a6841fab5c573e85ee4c4e5f4a0b7ff


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/95a81934
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/95a81934
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/95a81934

Branch: refs/heads/YARN-5881
Commit: 95a81934385a1a0f404930b8075e2a066fc6c413
Parents: 4222c97
Author: Wangda Tan <wa...@apache.org>
Authored: Fri Aug 11 10:30:23 2017 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Fri Aug 11 10:30:23 2017 -0700

----------------------------------------------------------------------
 .../org/apache/hadoop/util/StringUtils.java     |  31 ++
 .../hadoop/yarn/util/UnitsConversionUtil.java   | 217 ++++++++
 .../resource/DefaultResourceCalculator.java     |   6 +
 .../resource/DominantResourceCalculator.java    |   7 +
 .../yarn/util/resource/ResourceCalculator.java  |  12 +
 .../hadoop/yarn/util/resource/Resources.java    |   5 +
 .../capacity/FifoCandidatesSelector.java        |   9 +-
 .../ProportionalCapacityPreemptionPolicy.java   |  10 +-
 .../monitor/capacity/TempQueuePerPartition.java |  16 +-
 .../scheduler/AbstractResourceUsage.java        | 198 +++++++
 .../scheduler/QueueResourceQuotas.java          | 153 ++++++
 .../scheduler/ResourceUsage.java                | 237 ++-------
 .../scheduler/capacity/AbstractCSQueue.java     | 162 +++++-
 .../scheduler/capacity/CSQueue.java             |  42 +-
 .../scheduler/capacity/CSQueueUtils.java        |  24 +-
 .../CapacitySchedulerConfiguration.java         | 179 ++++++-
 .../scheduler/capacity/LeafQueue.java           |  31 +-
 .../scheduler/capacity/ParentQueue.java         | 203 +++++++-
 .../scheduler/capacity/UsersManager.java        |   5 +-
 .../PriorityUtilizationQueueOrderingPolicy.java |  11 +
 .../webapp/dao/CapacitySchedulerQueueInfo.java  |  15 +
 .../yarn/server/resourcemanager/MockNM.java     |   8 +
 .../yarn/server/resourcemanager/MockRM.java     |   6 +
 ...alCapacityPreemptionPolicyMockFramework.java |  13 +
 ...estProportionalCapacityPreemptionPolicy.java |  29 +-
 ...pacityPreemptionPolicyIntraQueueWithDRF.java |   6 +-
 .../TestAbsoluteResourceConfiguration.java      | 516 +++++++++++++++++++
 .../capacity/TestApplicationLimits.java         |  30 +-
 .../TestApplicationLimitsByPartition.java       |   4 +
 .../capacity/TestCapacityScheduler.java         |   2 +-
 .../scheduler/capacity/TestChildQueueOrder.java |   2 +
 .../scheduler/capacity/TestLeafQueue.java       | 261 ++++------
 .../scheduler/capacity/TestParentQueue.java     |   8 +
 .../scheduler/capacity/TestReservations.java    |  17 +
 ...tPriorityUtilizationQueueOrderingPolicy.java |   3 +
 .../webapp/TestRMWebServicesCapacitySched.java  |   4 +-
 36 files changed, 2046 insertions(+), 436 deletions(-)
----------------------------------------------------------------------
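
Before the diffs, one of the new support classes deserves a quick sketch. UnitsConversionUtil (added below) models each unit as a ratio to the base unit via Converter(numerator, denominator) pairs -- KILO is 1000/1, MILLI is 1/1000 -- so converting between two units composes their ratios. A minimal illustration of that arithmetic, inferred from the constants in the diff rather than taken from the committed convert code:

    // value in fromUnit -> base units -> toUnit
    long inBase = value * from.numerator / from.denominator;
    long converted = inBase * to.denominator / to.numerator;

    // Example: 5 "G" to "M"
    //   inBase    = 5 * 1_000_000_000 / 1 = 5_000_000_000
    //   converted = 5_000_000_000 * 1 / 1_000_000 = 5_000

The java.math.BigInteger import in the new file suggests the real implementation also guards this multiplication against long overflow.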


http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
index cda5ec7..1be8a08 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
@@ -1152,4 +1152,35 @@ public class StringUtils {
     return s1.equalsIgnoreCase(s2);
   }
 
+  /**
+   * <p>Checks if the String contains only unicode letters.</p>
+   *
+   * <p><code>null</code> will return <code>false</code>.
+   * An empty String (length()=0) will return <code>true</code>.</p>
+   *
+   * <pre>
+   * StringUtils.isAlpha(null)   = false
+   * StringUtils.isAlpha("")     = true
+   * StringUtils.isAlpha("  ")   = false
+   * StringUtils.isAlpha("abc")  = true
+   * StringUtils.isAlpha("ab2c") = false
+   * StringUtils.isAlpha("ab-c") = false
+   * </pre>
+   *
+   * @param str  the String to check, may be null
+   * @return <code>true</code> if only contains letters, and is non-null
+   */
+  public static boolean isAlpha(String str) {
+      if (str == null) {
+          return false;
+      }
+      int sz = str.length();
+      for (int i = 0; i < sz; i++) {
+          if (!Character.isLetter(str.charAt(i))) {
+              return false;
+          }
+      }
+      return true;
+  }
+
 }
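
For orientation, a hedged usage sketch of the new helper (the token and
the surrounding context are illustrative, not part of this patch):

    // Validate that a candidate unit suffix is purely alphabetic.
    String token = "Mi";
    if (StringUtils.isAlpha(token)) {
      // safe to treat as a letter-only suffix; note isAlpha("") is true
    }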

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/UnitsConversionUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/UnitsConversionUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/UnitsConversionUtil.java
new file mode 100644
index 0000000..79ee0f7
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/UnitsConversionUtil.java
@@ -0,0 +1,217 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import java.math.BigInteger;
+import java.util.*;
+
+/**
+ * A utility to convert values from one unit to another. The unit refers to
+ * the scale at which the value is expressed: pico, nano, etc.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public class UnitsConversionUtil {
+
+  /**
+   * Helper class for encapsulating conversion values.
+   */
+  public static class Converter {
+    private long numerator;
+    private long denominator;
+
+    Converter(long n, long d) {
+      this.numerator = n;
+      this.denominator = d;
+    }
+  }
+
+  private static final String[] UNITS =
+      { "p", "n", "u", "m", "", "k", "M", "G", "T", "P", "Ki", "Mi", "Gi", "Ti",
+          "Pi" };
+  private static final List<String> SORTED_UNITS = Arrays.asList(UNITS);
+  public static final Set<String> KNOWN_UNITS = createKnownUnitsSet();
+  private static final Converter PICO =
+      new Converter(1L, 1000L * 1000L * 1000L * 1000L);
+  private static final Converter NANO =
+      new Converter(1L, 1000L * 1000L * 1000L);
+  private static final Converter MICRO = new Converter(1L, 1000L * 1000L);
+  private static final Converter MILLI = new Converter(1L, 1000L);
+  private static final Converter BASE = new Converter(1L, 1L);
+  private static final Converter KILO = new Converter(1000L, 1L);
+  private static final Converter MEGA = new Converter(1000L * 1000L, 1L);
+  private static final Converter GIGA =
+      new Converter(1000L * 1000L * 1000L, 1L);
+  private static final Converter TERA =
+      new Converter(1000L * 1000L * 1000L * 1000L, 1L);
+  private static final Converter PETA =
+      new Converter(1000L * 1000L * 1000L * 1000L * 1000L, 1L);
+
+  private static final Converter KILO_BINARY = new Converter(1024L, 1L);
+  private static final Converter MEGA_BINARY = new Converter(1024L * 1024L, 1L);
+  private static final Converter GIGA_BINARY =
+      new Converter(1024L * 1024L * 1024L, 1L);
+  private static final Converter TERA_BINARY =
+      new Converter(1024L * 1024L * 1024L * 1024L, 1L);
+  private static final Converter PETA_BINARY =
+      new Converter(1024L * 1024L * 1024L * 1024L * 1024L, 1L);
+
+  private static Set<String> createKnownUnitsSet() {
+    Set<String> ret = new HashSet<>();
+    ret.addAll(Arrays.asList(UNITS));
+    return ret;
+  }
+
+  private static Converter getConverter(String unit) {
+    switch (unit) {
+    case "p":
+      return PICO;
+    case "n":
+      return NANO;
+    case "u":
+      return MICRO;
+    case "m":
+      return MILLI;
+    case "":
+      return BASE;
+    case "k":
+      return KILO;
+    case "M":
+      return MEGA;
+    case "G":
+      return GIGA;
+    case "T":
+      return TERA;
+    case "P":
+      return PETA;
+    case "Ki":
+      return KILO_BINARY;
+    case "Mi":
+      return MEGA_BINARY;
+    case "Gi":
+      return GIGA_BINARY;
+    case "Ti":
+      return TERA_BINARY;
+    case "Pi":
+      return PETA_BINARY;
+    default:
+      throw new IllegalArgumentException(
+          "Unknown unit '" + unit + "'. Known units are " + KNOWN_UNITS);
+    }
+  }
+
+  /**
+   * Converts a value from one unit to another. Supported units can be obtained
+   * by inspecting the KNOWN_UNITS set.
+   *
+   * @param fromUnit  the unit of the from value
+   * @param toUnit    the target unit
+   * @param fromValue the value you wish to convert
+   * @return the value in toUnit
+   */
+  public static Long convert(String fromUnit, String toUnit, Long fromValue) {
+    if (toUnit == null || fromUnit == null || fromValue == null) {
+      throw new IllegalArgumentException("One or more arguments are null");
+    }
+    String overflowMsg =
+        "Converting " + fromValue + " from '" + fromUnit + "' to '" + toUnit
+            + "' will result in an overflow of Long";
+    if (fromUnit.equals(toUnit)) {
+      return fromValue;
+    }
+    Converter fc = getConverter(fromUnit);
+    Converter tc = getConverter(toUnit);
+    Long numerator = fc.numerator * tc.denominator;
+    Long denominator = fc.denominator * tc.numerator;
+    Long numeratorMultiplierLimit = Long.MAX_VALUE / numerator;
+    if (numerator < denominator) {
+      if (numeratorMultiplierLimit < fromValue) {
+        throw new IllegalArgumentException(overflowMsg);
+      }
+      return (fromValue * numerator) / denominator;
+    }
+    if (numeratorMultiplierLimit > fromValue) {
+      return (numerator * fromValue) / denominator;
+    }
+    Long tmp = numerator / denominator;
+    if ((Long.MAX_VALUE / tmp) < fromValue) {
+      throw new IllegalArgumentException(overflowMsg);
+    }
+    return fromValue * tmp;
+  }
+
+  /**
+   * Compare a value in a given unit with a value in another unit. The return
+   * value is equivalent to the value returned by compareTo.
+   *
+   * @param unitA  first unit
+   * @param valueA first value
+   * @param unitB  second unit
+   * @param valueB second value
+   * @return +1, 0 or -1 depending on whether the relationship is greater than,
+   * equal to or less than
+   */
+  public static int compare(String unitA, Long valueA, String unitB,
+      Long valueB) {
+    if (unitA == null || unitB == null) {
+      throw new IllegalArgumentException(
+          "Units cannot be null");
+    }
+    if (!KNOWN_UNITS.contains(unitA)) {
+      throw new IllegalArgumentException("Unknown unit '" + unitA + "'");
+    }
+    if (!KNOWN_UNITS.contains(unitB)) {
+      throw new IllegalArgumentException("Unknown unit '" + unitB + "'");
+    }
+    Converter unitAC = getConverter(unitA);
+    Converter unitBC = getConverter(unitB);
+    if (unitA.equals(unitB)) {
+      return valueA.compareTo(valueB);
+    }
+    int unitAPos = SORTED_UNITS.indexOf(unitA);
+    int unitBPos = SORTED_UNITS.indexOf(unitB);
+    try {
+      Long tmpA = valueA;
+      Long tmpB = valueB;
+      if (unitAPos < unitBPos) {
+        tmpB = convert(unitB, unitA, valueB);
+      } else {
+        tmpA = convert(unitA, unitB, valueA);
+      }
+      return tmpA.compareTo(tmpB);
+    } catch (IllegalArgumentException ie) {
+      BigInteger tmpA = BigInteger.valueOf(valueA);
+      BigInteger tmpB = BigInteger.valueOf(valueB);
+      if (unitAPos < unitBPos) {
+        tmpB = tmpB.multiply(BigInteger.valueOf(unitBC.numerator));
+        tmpB = tmpB.multiply(BigInteger.valueOf(unitAC.denominator));
+        tmpB = tmpB.divide(BigInteger.valueOf(unitBC.denominator));
+        tmpB = tmpB.divide(BigInteger.valueOf(unitAC.numerator));
+      } else {
+        tmpA = tmpA.multiply(BigInteger.valueOf(unitAC.numerator));
+        tmpA = tmpA.multiply(BigInteger.valueOf(unitBC.denominator));
+        tmpA = tmpA.divide(BigInteger.valueOf(unitAC.denominator));
+        tmpA = tmpA.divide(BigInteger.valueOf(unitBC.numerator));
+      }
+      return tmpA.compareTo(tmpB);
+    }
+  }
+}
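
A short usage sketch of the new API; the results follow from the
converters defined above (the values themselves are illustrative):

    // 5 G expressed in M: (10^9 / 10^6) * 5
    Long inMega = UnitsConversionUtil.convert("G", "M", 5L);     // 5000
    // 1 G equals 1000 M, so compare() returns 0
    int cmp = UnitsConversionUtil.compare("G", 1L, "M", 1000L);  // 0
    // Integer division truncates: 1 Mi in "k" is 1024/1000 = 1
    Long inKilo = UnitsConversionUtil.convert("Mi", "k", 1L);    // 1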

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
index bdf60bd..764deaa 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
@@ -132,4 +132,10 @@ public class DefaultResourceCalculator extends ResourceCalculator {
   public boolean isAnyMajorResourceZero(Resource resource) {
     return resource.getMemorySize() == 0f;
   }
+
+  @Override
+  public Resource normalizeDown(Resource r, Resource stepFactor) {
+    return Resources.createResource(
+        roundDown((r.getMemorySize()), stepFactor.getMemorySize()));
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
index 7697e1d..05ddb41 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
@@ -244,4 +244,11 @@ public class DominantResourceCalculator extends ResourceCalculator {
   public boolean isAnyMajorResourceZero(Resource resource) {
     return resource.getMemorySize() == 0f || resource.getVirtualCores() == 0;
   }
+
+  @Override
+  public Resource normalizeDown(Resource r, Resource stepFactor) {
+    return Resources.createResource(
+        roundDown(r.getMemorySize(), stepFactor.getMemorySize()),
+        roundDown(r.getVirtualCores(), stepFactor.getVirtualCores()));
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
index 398dac5..013b723 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
@@ -236,4 +236,16 @@ public abstract class ResourceCalculator {
    * @return returns true if any resource is zero.
    */
   public abstract boolean isAnyMajorResourceZero(Resource resource);
+
+  /**
+   * Get resource <code>r</code> and normalize down using step-factor
+   * <code>stepFactor</code>.
+   *
+   * @param r
+   *          resource to be normalized down
+   * @param stepFactor
+   *          factor by which to normalize down
+   * @return resulting normalized resource
+   */
+  public abstract Resource normalizeDown(Resource r, Resource stepFactor);
 }
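
A minimal sketch of the new normalizeDown contract, assuming roundDown
truncates to the nearest lower multiple of the step factor (as its
existing callers imply); the values are illustrative:

    ResourceCalculator rc = new DefaultResourceCalculator();
    Resource r = Resources.createResource(4500);     // 4500 MB
    Resource step = Resources.createResource(1024);  // 1024 MB step factor
    Resource norm = rc.normalizeDown(r, step);       // memory == 4096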

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
index a1d14fd..3972ec2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
@@ -355,4 +355,9 @@ public class Resources {
       Resource resource) {
     return rc.isAnyMajorResourceZero(resource);
   }
+
+  public static Resource normalizeDown(ResourceCalculator calculator,
+      Resource resource, Resource factor) {
+    return calculator.normalizeDown(resource, factor);
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoCandidatesSelector.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoCandidatesSelector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoCandidatesSelector.java
index f843db4..748548a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoCandidatesSelector.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoCandidatesSelector.java
@@ -140,10 +140,10 @@ public class FifoCandidatesSelector
         // Can try preempting AMContainers (still saving atmost
         // maxAMCapacityForThisQueue AMResource's) if more resources are
         // required to be preemptionCandidates from this Queue.
-        Resource maxAMCapacityForThisQueue = Resources.multiply(
-            Resources.multiply(clusterResource,
-                leafQueue.getAbsoluteCapacity()),
-            leafQueue.getMaxAMResourcePerQueuePercent());
+        Resource maxAMCapacityForThisQueue = Resources
+            .multiply(
+                leafQueue.getEffectiveCapacity(RMNodeLabelsManager.NO_LABEL),
+                leafQueue.getMaxAMResourcePerQueuePercent());
 
         preemptAMContainers(clusterResource, selectedCandidates, skippedAMContainerlist,
             resToObtainByPartition, skippedAMSize, maxAMCapacityForThisQueue,
@@ -199,7 +199,6 @@ public class FifoCandidatesSelector
    * Given a target preemption for a specific application, select containers
    * to preempt (after unreserving all reservation for that app).
    */
-  @SuppressWarnings("unchecked")
   private void preemptFrom(FiCaSchedulerApp app,
       Resource clusterResource, Map<String, Resource> resToObtainByPartition,
       List<RMContainer> skippedAMContainerlist, Resource skippedAMSize,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
index fc8ad2b..8b6fa3f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolic
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.PreemptableResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
@@ -486,6 +487,13 @@ public class ProportionalCapacityPreemptionPolicy
       float absMaxCap = qc.getAbsoluteMaximumCapacity(partitionToLookAt);
       boolean preemptionDisabled = curQueue.getPreemptionDisabled();
 
+      QueueResourceQuotas queueResourceQuotas = curQueue
+          .getQueueResourceQuotas();
+      Resource effMinRes = queueResourceQuotas
+          .getEffectiveMinResource(partitionToLookAt);
+      Resource effMaxRes = queueResourceQuotas
+          .getEffectiveMaxResource(partitionToLookAt);
+
       Resource current = Resources
           .clone(curQueue.getQueueResourceUsage().getUsed(partitionToLookAt));
       Resource killable = Resources.none();
@@ -511,7 +519,7 @@ public class ProportionalCapacityPreemptionPolicy
 
       ret = new TempQueuePerPartition(queueName, current, preemptionDisabled,
           partitionToLookAt, killable, absCap, absMaxCap, partitionResource,
-          reserved, curQueue);
+          reserved, curQueue, effMinRes, effMaxRes);
 
       if (curQueue instanceof ParentQueue) {
         String configuredOrderingPolicy =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
index 89452f9..bd236fe 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
@@ -48,6 +48,9 @@ public class TempQueuePerPartition extends AbstractPreemptionEntity {
 
   double normalizedGuarantee;
 
+  private Resource effMinRes;
+  private Resource effMaxRes;
+
   final ArrayList<TempQueuePerPartition> children;
   private Collection<TempAppPerPartition> apps;
   LeafQueue leafQueue;
@@ -68,7 +71,8 @@ public class TempQueuePerPartition extends AbstractPreemptionEntity {
   TempQueuePerPartition(String queueName, Resource current,
       boolean preemptionDisabled, String partition, Resource killable,
       float absCapacity, float absMaxCapacity, Resource totalPartitionResource,
-      Resource reserved, CSQueue queue) {
+      Resource reserved, CSQueue queue, Resource effMinRes,
+      Resource effMaxRes) {
     super(queueName, current, Resource.newInstance(0, 0), reserved,
         Resource.newInstance(0, 0));
 
@@ -95,6 +99,8 @@ public class TempQueuePerPartition extends AbstractPreemptionEntity {
     this.absCapacity = absCapacity;
     this.absMaxCapacity = absMaxCapacity;
     this.totalPartitionResource = totalPartitionResource;
+    this.effMinRes = effMinRes;
+    this.effMaxRes = effMaxRes;
   }
 
   public void setLeafQueue(LeafQueue l) {
@@ -177,10 +183,18 @@ public class TempQueuePerPartition extends AbstractPreemptionEntity {
   }
 
   public Resource getGuaranteed() {
+    if (!effMinRes.equals(Resources.none())) {
+      return Resources.clone(effMinRes);
+    }
+
     return Resources.multiply(totalPartitionResource, absCapacity);
   }
 
   public Resource getMax() {
+    if (!effMaxRes.equals(Resources.none())) {
+      return Resources.clone(effMaxRes);
+    }
+
     return Resources.multiply(totalPartitionResource, absMaxCapacity);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractResourceUsage.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractResourceUsage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractResourceUsage.java
new file mode 100644
index 0000000..c295323
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractResourceUsage.java
@@ -0,0 +1,198 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
+
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
+import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
+import org.apache.hadoop.yarn.util.resource.Resources;
+
+/**
+ * This class can be used to track resource usage in queue/user/app.
+ *
+ * This class is thread-safe.
+ */
+public class AbstractResourceUsage {
+  protected ReadLock readLock;
+  protected WriteLock writeLock;
+  protected Map<String, UsageByLabel> usages;
+  // short for no-label :)
+  private static final String NL = CommonNodeLabelsManager.NO_LABEL;
+
+  public AbstractResourceUsage() {
+    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+    readLock = lock.readLock();
+    writeLock = lock.writeLock();
+
+    usages = new HashMap<String, UsageByLabel>();
+    usages.put(NL, new UsageByLabel(NL));
+  }
+
+  // Usage enum here to make the implementation cleaner
+  public enum ResourceType {
+    // CACHED_USED and CACHED_PENDING may be read by anyone, but must only
+    // be written by ordering policies
+    USED(0), PENDING(1), AMUSED(2), RESERVED(3), CACHED_USED(4),
+    CACHED_PENDING(5), AMLIMIT(6), MIN_RESOURCE(7), MAX_RESOURCE(8),
+    EFF_MIN_RESOURCE(9), EFF_MAX_RESOURCE(10),
+    EFF_MIN_RESOURCE_UP(11), EFF_MAX_RESOURCE_UP(12);
+
+    private int idx;
+
+    private ResourceType(int value) {
+      this.idx = value;
+    }
+  }
+
+  public static class UsageByLabel {
+    // usage by label, contains all UsageType
+    private Resource[] resArr;
+
+    public UsageByLabel(String label) {
+      resArr = new Resource[ResourceType.values().length];
+      for (int i = 0; i < resArr.length; i++) {
+        resArr[i] = Resource.newInstance(0, 0);
+      }
+    }
+
+    public Resource getUsed() {
+      return resArr[ResourceType.USED.idx];
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder();
+      sb.append("{used=" + resArr[0] + "%, ");
+      sb.append("pending=" + resArr[1] + "%, ");
+      sb.append("am_used=" + resArr[2] + "%, ");
+      sb.append("reserved=" + resArr[3] + "%}");
+      sb.append("min_eff=" + resArr[9] + "%, ");
+      sb.append("max_eff=" + resArr[10] + "%}");
+      sb.append("min_effup=" + resArr[11] + "%, ");
+      return sb.toString();
+    }
+  }
+
+  private static Resource normalize(Resource res) {
+    if (res == null) {
+      return Resources.none();
+    }
+    return res;
+  }
+
+  protected Resource _get(String label, ResourceType type) {
+    if (label == null) {
+      label = RMNodeLabelsManager.NO_LABEL;
+    }
+
+    try {
+      readLock.lock();
+      UsageByLabel usage = usages.get(label);
+      if (null == usage) {
+        return Resources.none();
+      }
+      return normalize(usage.resArr[type.idx]);
+    } finally {
+      readLock.unlock();
+    }
+  }
+
+  protected Resource _getAll(ResourceType type) {
+    try {
+      readLock.lock();
+      Resource allOfType = Resources.createResource(0);
+      for (Map.Entry<String, UsageByLabel> usageEntry : usages.entrySet()) {
+        //all usages types are initialized
+        Resources.addTo(allOfType, usageEntry.getValue().resArr[type.idx]);
+      }
+      return allOfType;
+    } finally {
+      readLock.unlock();
+    }
+  }
+
+  private UsageByLabel getAndAddIfMissing(String label) {
+    if (label == null) {
+      label = RMNodeLabelsManager.NO_LABEL;
+    }
+    if (!usages.containsKey(label)) {
+      UsageByLabel u = new UsageByLabel(label);
+      usages.put(label, u);
+      return u;
+    }
+
+    return usages.get(label);
+  }
+
+  protected void _set(String label, ResourceType type, Resource res) {
+    try {
+      writeLock.lock();
+      UsageByLabel usage = getAndAddIfMissing(label);
+      usage.resArr[type.idx] = res;
+    } finally {
+      writeLock.unlock();
+    }
+  }
+
+  protected void _inc(String label, ResourceType type, Resource res) {
+    try {
+      writeLock.lock();
+      UsageByLabel usage = getAndAddIfMissing(label);
+      Resources.addTo(usage.resArr[type.idx], res);
+    } finally {
+      writeLock.unlock();
+    }
+  }
+
+  protected void _dec(String label, ResourceType type, Resource res) {
+    try {
+      writeLock.lock();
+      UsageByLabel usage = getAndAddIfMissing(label);
+      Resources.subtractFrom(usage.resArr[type.idx], res);
+    } finally {
+      writeLock.unlock();
+    }
+  }
+
+  @Override
+  public String toString() {
+    try {
+      readLock.lock();
+      return usages.toString();
+    } finally {
+      readLock.unlock();
+    }
+  }
+
+  public Set<String> getNodePartitionsSet() {
+    try {
+      readLock.lock();
+      return usages.keySet();
+    } finally {
+      readLock.unlock();
+    }
+  }
+}
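
Subclasses are expected to expose typed accessors over the protected
_get/_set primitives; a minimal hypothetical subclass (QueueResourceQuotas
below follows the same pattern):

    public class DemoUsage extends AbstractResourceUsage {
      public Resource getUsed(String label) {
        return _get(label, ResourceType.USED);
      }
      public void setUsed(String label, Resource res) {
        _set(label, ResourceType.USED, res);
      }
    }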

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueResourceQuotas.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueResourceQuotas.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueResourceQuotas.java
new file mode 100644
index 0000000..2e653fc
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueResourceQuotas.java
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler;
+
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
+
+/**
+ * QueueResourceQuotas tracks the following quotas for each node label:
+ * - EFFECTIVE_MIN_CAPACITY
+ * - EFFECTIVE_MAX_CAPACITY
+ * This class can be used to track resource quotas in queue/user/app.
+ *
+ * This class is thread-safe.
+ */
+public class QueueResourceQuotas extends AbstractResourceUsage {
+  // short for no-label :)
+  private static final String NL = CommonNodeLabelsManager.NO_LABEL;
+
+  public QueueResourceQuotas() {
+    super();
+  }
+
+  /*
+   * Configured Minimum Resource
+   */
+  public Resource getConfiguredMinResource() {
+    return _get(NL, ResourceType.MIN_RESOURCE);
+  }
+
+  public Resource getConfiguredMinResource(String label) {
+    return _get(label, ResourceType.MIN_RESOURCE);
+  }
+
+  public void setConfiguredMinResource(String label, Resource res) {
+    _set(label, ResourceType.MIN_RESOURCE, res);
+  }
+
+  public void setConfiguredMinResource(Resource res) {
+    _set(NL, ResourceType.MIN_RESOURCE, res);
+  }
+
+  /*
+   * Configured Maximum Resource
+   */
+  public Resource getConfiguredMaxResource() {
+    return getConfiguredMaxResource(NL);
+  }
+
+  public Resource getConfiguredMaxResource(String label) {
+    return _get(label, ResourceType.MAX_RESOURCE);
+  }
+
+  public void setConfiguredMaxResource(Resource res) {
+    setConfiguredMaxResource(NL, res);
+  }
+
+  public void setConfiguredMaxResource(String label, Resource res) {
+    _set(label, ResourceType.MAX_RESOURCE, res);
+  }
+
+  /*
+   * Effective Minimum Resource
+   */
+  public Resource getEffectiveMinResource() {
+    return _get(NL, ResourceType.EFF_MIN_RESOURCE);
+  }
+
+  public Resource getEffectiveMinResource(String label) {
+    return _get(label, ResourceType.EFF_MIN_RESOURCE);
+  }
+
+  public void setEffectiveMinResource(String label, Resource res) {
+    _set(label, ResourceType.EFF_MIN_RESOURCE, res);
+  }
+
+  public void setEffectiveMinResource(Resource res) {
+    _set(NL, ResourceType.EFF_MIN_RESOURCE, res);
+  }
+
+  /*
+   * Effective Maximum Resource
+   */
+  public Resource getEffectiveMaxResource() {
+    return getEffectiveMaxResource(NL);
+  }
+
+  public Resource getEffectiveMaxResource(String label) {
+    return _get(label, ResourceType.EFF_MAX_RESOURCE);
+  }
+
+  public void setEffectiveMaxResource(Resource res) {
+    setEffectiveMaxResource(NL, res);
+  }
+
+  public void setEffectiveMaxResource(String label, Resource res) {
+    _set(label, ResourceType.EFF_MAX_RESOURCE, res);
+  }
+
+  /*
+   * Effective Minimum Resource Up
+   */
+  public Resource getEffectiveMinResourceUp() {
+    return _get(NL, ResourceType.EFF_MIN_RESOURCE_UP);
+  }
+
+  public Resource getEffectiveMinResourceUp(String label) {
+    return _get(label, ResourceType.EFF_MIN_RESOURCE_UP);
+  }
+
+  public void setEffectiveMinResourceUp(String label, Resource res) {
+    _set(label, ResourceType.EFF_MIN_RESOURCE_UP, res);
+  }
+
+  public void setEffectiveMinResourceUp(Resource res) {
+    _set(NL, ResourceType.EFF_MIN_RESOURCE_UP, res);
+  }
+
+  /*
+   * Effective Maximum Resource Up
+   */
+  public Resource getEffectiveMaxResourceUp() {
+    return getEffectiveMaxResourceUp(NL);
+  }
+
+  public Resource getEffectiveMaxResourceUp(String label) {
+    return _get(label, ResourceType.EFF_MAX_RESOURCE_UP);
+  }
+
+  public void setEffectiveMaxResourceUp(Resource res) {
+    setEffectiveMaxResourceUp(NL, res);
+  }
+
+  public void setEffectiveMaxResourceUp(String label, Resource res) {
+    _set(label, ResourceType.EFF_MAX_RESOURCE_UP, res);
+  }
+}
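
A hedged usage sketch; the "gpu" partition name and the values are
illustrative:

    QueueResourceQuotas quotas = new QueueResourceQuotas();
    quotas.setEffectiveMinResource("gpu", Resource.newInstance(8192, 8));
    quotas.setEffectiveMaxResource("gpu", Resource.newInstance(16384, 16));
    // Reads back <memory:8192, vCores:8> for the "gpu" partition.
    Resource min = quotas.getEffectiveMinResource("gpu");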

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
index 6f0c7d2..ede4aec 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
@@ -39,63 +39,12 @@ import org.apache.hadoop.yarn.util.resource.Resources;
  * 
  * And it is thread-safe
  */
-public class ResourceUsage {
-  private ReadLock readLock;
-  private WriteLock writeLock;
-  private Map<String, UsageByLabel> usages;
+public class ResourceUsage extends AbstractResourceUsage {
   // short for no-label :)
   private static final String NL = CommonNodeLabelsManager.NO_LABEL;
-  private final UsageByLabel usageNoLabel;
 
   public ResourceUsage() {
-    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
-    readLock = lock.readLock();
-    writeLock = lock.writeLock();
-
-    usages = new HashMap<String, UsageByLabel>();
-    usageNoLabel = new UsageByLabel(NL);
-    usages.put(NL, usageNoLabel);
-  }
-
-  // Usage enum here to make implement cleaner
-  private enum ResourceType {
-    //CACHED_USED and CACHED_PENDING may be read by anyone, but must only
-    //be written by ordering policies
-    USED(0), PENDING(1), AMUSED(2), RESERVED(3), CACHED_USED(4),
-      CACHED_PENDING(5), AMLIMIT(6);
-
-    private int idx;
-
-    private ResourceType(int value) {
-      this.idx = value;
-    }
-  }
-
-  private static class UsageByLabel {
-    // usage by label, contains all UsageType
-    private Resource[] resArr;
-
-    public UsageByLabel(String label) {
-      resArr = new Resource[ResourceType.values().length];
-      for (int i = 0; i < resArr.length; i++) {
-        resArr[i] = Resource.newInstance(0, 0);
-      };
-    }
-    
-    public Resource getUsed() {
-      return resArr[ResourceType.USED.idx];
-    }
-
-    @Override
-    public String toString() {
-      StringBuilder sb = new StringBuilder();
-      sb.append("{used=" + resArr[0] + "%, ");
-      sb.append("pending=" + resArr[1] + "%, ");
-      sb.append("am_used=" + resArr[2] + "%, ");
-      sb.append("reserved=" + resArr[3] + "%}");
-      sb.append("am_limit=" + resArr[6] + "%, ");
-      return sb.toString();
-    }
+    super();
   }
 
   /*
@@ -109,22 +58,6 @@ public class ResourceUsage {
     return _get(label, ResourceType.USED);
   }
 
-  public Resource getCachedUsed() {
-    return _get(NL, ResourceType.CACHED_USED);
-  }
-
-  public Resource getCachedUsed(String label) {
-    return _get(label, ResourceType.CACHED_USED);
-  }
-
-  public Resource getCachedPending() {
-    return _get(NL, ResourceType.CACHED_PENDING);
-  }
-
-  public Resource getCachedPending(String label) {
-    return _get(label, ResourceType.CACHED_PENDING);
-  }
-
   public void incUsed(String label, Resource res) {
     _inc(label, ResourceType.USED, res);
   }
@@ -145,7 +78,7 @@ public class ResourceUsage {
     setUsed(NL, res);
   }
   
-  public void copyAllUsed(ResourceUsage other) {
+  public void copyAllUsed(AbstractResourceUsage other) {
     try {
       writeLock.lock();
       for (Entry<String, UsageByLabel> entry : other.usages.entrySet()) {
@@ -160,22 +93,6 @@ public class ResourceUsage {
     _set(label, ResourceType.USED, res);
   }
 
-  public void setCachedUsed(String label, Resource res) {
-    _set(label, ResourceType.CACHED_USED, res);
-  }
-
-  public void setCachedUsed(Resource res) {
-    _set(NL, ResourceType.CACHED_USED, res);
-  }
-
-  public void setCachedPending(String label, Resource res) {
-    _set(label, ResourceType.CACHED_PENDING, res);
-  }
-
-  public void setCachedPending(Resource res) {
-    _set(NL, ResourceType.CACHED_PENDING, res);
-  }
-
   /*
    * Pending
    */
@@ -281,6 +198,47 @@ public class ResourceUsage {
     _set(label, ResourceType.AMUSED, res);
   }
 
+  public Resource getAllPending() {
+    return _getAll(ResourceType.PENDING);
+  }
+
+  public Resource getAllUsed() {
+    return _getAll(ResourceType.USED);
+  }
+
+  // Cache Used
+  public Resource getCachedUsed() {
+    return _get(NL, ResourceType.CACHED_USED);
+  }
+
+  public Resource getCachedUsed(String label) {
+    return _get(label, ResourceType.CACHED_USED);
+  }
+
+  public Resource getCachedPending() {
+    return _get(NL, ResourceType.CACHED_PENDING);
+  }
+
+  public Resource getCachedPending(String label) {
+    return _get(label, ResourceType.CACHED_PENDING);
+  }
+
+  public void setCachedUsed(String label, Resource res) {
+    _set(label, ResourceType.CACHED_USED, res);
+  }
+
+  public void setCachedUsed(Resource res) {
+    _set(NL, ResourceType.CACHED_USED, res);
+  }
+
+  public void setCachedPending(String label, Resource res) {
+    _set(label, ResourceType.CACHED_PENDING, res);
+  }
+
+  public void setCachedPending(Resource res) {
+    _set(NL, ResourceType.CACHED_PENDING, res);
+  }
+
   /*
    * AM-Resource Limit
    */
@@ -316,94 +274,6 @@ public class ResourceUsage {
     _set(label, ResourceType.AMLIMIT, res);
   }
 
-  private static Resource normalize(Resource res) {
-    if (res == null) {
-      return Resources.none();
-    }
-    return res;
-  }
-
-  private Resource _get(String label, ResourceType type) {
-    if (label == null || label.equals(NL)) {
-      return normalize(usageNoLabel.resArr[type.idx]);
-    }
-    try {
-      readLock.lock();
-      UsageByLabel usage = usages.get(label);
-      if (null == usage) {
-        return Resources.none();
-      }
-      return normalize(usage.resArr[type.idx]);
-    } finally {
-      readLock.unlock();
-    }
-  }
-  
-  private Resource _getAll(ResourceType type) {
-    try {
-      readLock.lock();
-      Resource allOfType = Resources.createResource(0);
-      for (Map.Entry<String, UsageByLabel> usageEntry : usages.entrySet()) {
-        //all usages types are initialized
-        Resources.addTo(allOfType, usageEntry.getValue().resArr[type.idx]);
-      }
-      return allOfType;
-    } finally {
-      readLock.unlock();
-    }
-  }
-  
-  public Resource getAllPending() {
-    return _getAll(ResourceType.PENDING);
-  }
-  
-  public Resource getAllUsed() {
-    return _getAll(ResourceType.USED);
-  }
-
-  private UsageByLabel getAndAddIfMissing(String label) {
-    if (label == null || label.equals(NL)) {
-      return usageNoLabel;
-    }
-    if (!usages.containsKey(label)) {
-      UsageByLabel u = new UsageByLabel(label);
-      usages.put(label, u);
-      return u;
-    }
-
-    return usages.get(label);
-  }
-
-  private void _set(String label, ResourceType type, Resource res) {
-    try {
-      writeLock.lock();
-      UsageByLabel usage = getAndAddIfMissing(label);
-      usage.resArr[type.idx] = res;
-    } finally {
-      writeLock.unlock();
-    }
-  }
-
-  private void _inc(String label, ResourceType type, Resource res) {
-    try {
-      writeLock.lock();
-      UsageByLabel usage = getAndAddIfMissing(label);
-      Resources.addTo(usage.resArr[type.idx], res);
-    } finally {
-      writeLock.unlock();
-    }
-  }
-
-  private void _dec(String label, ResourceType type, Resource res) {
-    try {
-      writeLock.lock();
-      UsageByLabel usage = getAndAddIfMissing(label);
-      Resources.subtractFrom(usage.resArr[type.idx], res);
-    } finally {
-      writeLock.unlock();
-    }
-  }
-
   public Resource getCachedDemand(String label) {
     try {
       readLock.lock();
@@ -415,23 +285,4 @@ public class ResourceUsage {
       readLock.unlock();
     }
   }
-  
-  @Override
-  public String toString() {
-    try {
-      readLock.lock();
-      return usages.toString();
-    } finally {
-      readLock.unlock();
-    }
-  }
-  
-  public Set<String> getNodePartitionsSet() {
-    try {
-      readLock.lock();
-      return usages.keySet();
-    } finally {
-      readLock.unlock();
-    }
-  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index 5fbdead..39ec57a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -56,6 +56,8 @@ import org.apache.hadoop.yarn.security.YarnAuthorizationProvider;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.AbsoluteResourceType;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt;
@@ -86,6 +88,7 @@ public abstract class AbstractCSQueue implements CSQueue {
 
   final ResourceCalculator resourceCalculator;
   Set<String> accessibleLabels;
+  Set<String> resourceTypes;
   final RMNodeLabelsManager labelManager;
   String defaultLabelExpression;
   
@@ -101,6 +104,14 @@ public abstract class AbstractCSQueue implements CSQueue {
   // etc.
   QueueCapacities queueCapacities;
 
+  QueueResourceQuotas queueResourceQuotas;
+
+  protected enum CapacityConfigType {
+    NONE, PERCENTAGE, ABSOLUTE_RESOURCE
+  }
+  protected CapacityConfigType capacityConfigType =
+      CapacityConfigType.NONE;
+
   private final RecordFactory recordFactory = 
       RecordFactoryProvider.getRecordFactory(null);
   protected CapacitySchedulerContext csContext;
@@ -138,6 +149,9 @@ public abstract class AbstractCSQueue implements CSQueue {
     // initialize QueueCapacities
     queueCapacities = new QueueCapacities(parent == null);
 
+    // initialize queueResourceQuotas
+    queueResourceQuotas = new QueueResourceQuotas();
+
     ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
     readLock = lock.readLock();
     writeLock = lock.writeLock();
@@ -268,6 +282,10 @@ public abstract class AbstractCSQueue implements CSQueue {
       this.defaultLabelExpression =
           csContext.getConfiguration().getDefaultNodeLabelExpression(
               getQueuePath());
+      this.resourceTypes = new HashSet<String>();
+      for (AbsoluteResourceType type : AbsoluteResourceType.values()) {
+        resourceTypes.add(type.toString().toLowerCase());
+      }
 
       // inherit from parent if labels not set
       if (this.accessibleLabels == null && parent != null) {
@@ -284,6 +302,11 @@ public abstract class AbstractCSQueue implements CSQueue {
       // After we setup labels, we can setup capacities
       setupConfigurableCapacities();
 
+      // Also fetch minimum/maximum resource constraint for this queue if
+      // configured.
+      capacityConfigType = CapacityConfigType.NONE;
+      updateConfigurableResourceRequirement(getQueuePath(), clusterResource);
+
       this.maximumAllocation =
           csContext.getConfiguration().getMaximumAllocationPerQueue(
               getQueuePath());
@@ -356,6 +379,125 @@ public abstract class AbstractCSQueue implements CSQueue {
     return unionInheritedWeights;
   }
 
+  protected void updateConfigurableResourceRequirement(String queuePath,
+      Resource clusterResource) {
+    CapacitySchedulerConfiguration conf = csContext.getConfiguration();
+    Set<String> configuredNodelabels = conf.getConfiguredNodeLabels(queuePath);
+
+    for (String label : configuredNodelabels) {
+      Resource minResource = conf.getMinimumResourceRequirement(label,
+          queuePath, resourceTypes);
+      Resource maxResource = conf.getMaximumResourceRequirement(label,
+          queuePath, resourceTypes);
+
+      if (this.capacityConfigType.equals(CapacityConfigType.NONE)) {
+        this.capacityConfigType = (!minResource.equals(Resources.none())
+            && queueCapacities.getAbsoluteCapacity(label) == 0f)
+                ? CapacityConfigType.ABSOLUTE_RESOURCE
+                : CapacityConfigType.PERCENTAGE;
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("capacityConfigType is updated as '" + capacityConfigType
+              + "' for queue '" + getQueueName());
+        }
+      }
+
+      validateAbsoluteVsPercentageCapacityConfig(minResource);
+
+      // If min resource for a resource type is greater than its max resource,
+      // throw exception to handle such error configs.
+      if (!maxResource.equals(Resources.none()) && Resources.greaterThan(
+          resourceCalculator, clusterResource, minResource, maxResource)) {
+        throw new IllegalArgumentException("Min resource configuration "
+            + minResource + " is greater than its max value:" + maxResource
+            + " in queue:" + getQueueName());
+      }
+
+      // If parent's max resource is less than a specific child's max
+      // resource, throw exception to handle such error configs.
+      if (parent != null) {
+        Resource parentMaxRes = parent.getQueueResourceQuotas()
+            .getConfiguredMaxResource(label);
+        if (Resources.greaterThan(resourceCalculator, clusterResource,
+            parentMaxRes, Resources.none())) {
+          if (Resources.greaterThan(resourceCalculator, clusterResource,
+              maxResource, parentMaxRes)) {
+            throw new IllegalArgumentException("Max resource configuration "
+                + maxResource + " is greater than parents max value:"
+                + parentMaxRes + " in queue:" + getQueueName());
+          }
+
+          // If child's max resource is not set, but its parent max resource is
+          // set, we must set child max resource to its parent's.
+          if (maxResource.equals(Resources.none())
+              && !minResource.equals(Resources.none())) {
+            maxResource = Resources.clone(parentMaxRes);
+          }
+        }
+      }
+
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Updating absolute resource configuration for queue:"
+            + getQueueName() + " as minResource=" + minResource
+            + " and maxResource=" + maxResource);
+      }
+
+      queueResourceQuotas.setConfiguredMinResource(label, minResource);
+      queueResourceQuotas.setConfiguredMaxResource(label, maxResource);
+    }
+  }
+
+  private void validateAbsoluteVsPercentageCapacityConfig(
+      Resource minResource) {
+    CapacityConfigType localType = CapacityConfigType.PERCENTAGE;
+    if (!minResource.equals(Resources.none())) {
+      localType = CapacityConfigType.ABSOLUTE_RESOURCE;
+    }
+
+    if (!queueName.equals("root")
+        && !this.capacityConfigType.equals(localType)) {
+      throw new IllegalArgumentException("Queue '" + getQueueName()
+          + "' should use either percentage based capacity"
+          + " configuration or absolute resource.");
+    }
+  }
+
+  @Override
+  public CapacityConfigType getCapacityConfigType() {
+    return capacityConfigType;
+  }
+
+  @Override
+  public Resource getEffectiveCapacity(String label) {
+    return Resources
+        .clone(getQueueResourceQuotas().getEffectiveMinResource(label));
+  }
+
+  @Override
+  public Resource getEffectiveCapacityUp(String label) {
+    return Resources
+        .clone(getQueueResourceQuotas().getEffectiveMinResourceUp(label));
+  }
+
+  @Override
+  public Resource getEffectiveCapacityDown(String label, Resource factor) {
+    return Resources.normalizeDown(resourceCalculator,
+        getQueueResourceQuotas().getEffectiveMinResource(label),
+        minimumAllocation);
+  }
+
+  @Override
+  public Resource getEffectiveMaxCapacity(String label) {
+    return Resources
+        .clone(getQueueResourceQuotas().getEffectiveMaxResource(label));
+  }
+
+  @Override
+  public Resource getEffectiveMaxCapacityDown(String label, Resource factor) {
+    return Resources.normalizeDown(resourceCalculator,
+        getQueueResourceQuotas().getEffectiveMaxResource(label),
+        minimumAllocation);
+  }
+
   private void initializeQueueState(QueueState previousState,
       QueueState configuredState, QueueState parentState) {
     // verify that we cannot set any value for State other than RUNNING/STOPPED
@@ -547,6 +689,11 @@ public abstract class AbstractCSQueue implements CSQueue {
   }
 
   @Override
+  public QueueResourceQuotas getQueueResourceQuotas() {
+    return queueResourceQuotas;
+  }
+
+  @Override
   public ReentrantReadWriteLock.ReadLock getReadLock() {
     return readLock;
   }
@@ -596,7 +743,7 @@ public abstract class AbstractCSQueue implements CSQueue {
        * limit-set-by-parent)
        */
       Resource queueMaxResource =
-          getQueueMaxResource(nodePartition, clusterResource);
+          getQueueMaxResource(nodePartition);
 
       return Resources.min(resourceCalculator, clusterResource,
           queueMaxResource, currentResourceLimits.getLimit());
@@ -609,11 +756,8 @@ public abstract class AbstractCSQueue implements CSQueue {
     return Resources.none();
   }
 
-  Resource getQueueMaxResource(String nodePartition, Resource clusterResource) {
-    return Resources.multiplyAndNormalizeDown(resourceCalculator,
-        labelManager.getResourceByLabel(nodePartition, clusterResource),
-        queueCapacities.getAbsoluteMaximumCapacity(nodePartition),
-        minimumAllocation);
+  Resource getQueueMaxResource(String nodePartition) {
+    return getEffectiveMaxCapacity(nodePartition);
   }
 
   public boolean hasChildQueues() {
@@ -774,7 +918,7 @@ public abstract class AbstractCSQueue implements CSQueue {
     queueUsage.incUsed(nodeLabel, resourceToInc);
     CSQueueUtils.updateUsedCapacity(resourceCalculator,
         labelManager.getResourceByLabel(nodeLabel, Resources.none()),
-        nodeLabel, this);
+        Resources.none(), nodeLabel, this);
     if (null != parent) {
       parent.incUsedResource(nodeLabel, resourceToInc, null);
     }
@@ -790,7 +934,7 @@ public abstract class AbstractCSQueue implements CSQueue {
     queueUsage.decUsed(nodeLabel, resourceToDec);
     CSQueueUtils.updateUsedCapacity(resourceCalculator,
         labelManager.getResourceByLabel(nodeLabel, Resources.none()),
-        nodeLabel, this);
+        Resources.none(), nodeLabel, this);
     if (null != parent) {
       parent.decUsedResource(nodeLabel, resourceToDec, null);
     }
@@ -896,7 +1040,7 @@ public abstract class AbstractCSQueue implements CSQueue {
         Resource maxResourceLimit;
         if (allocation.getSchedulingMode()
             == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
-          maxResourceLimit = getQueueMaxResource(partition, cluster);
+          maxResourceLimit = getQueueMaxResource(partition);
         } else{
           maxResourceLimit = labelManager.getResourceByLabel(
               schedulerContainer.getNodePartition(), cluster);
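
With these changes, consumers read absolute effective resources per
partition instead of recomputing percentages; a hedged sketch (queue is
any CSQueue, as in the FifoCandidatesSelector hunk earlier):

    Resource guaranteed =
        queue.getEffectiveCapacity(RMNodeLabelsManager.NO_LABEL);
    Resource ceiling =
        queue.getEffectiveMaxCapacity(RMNodeLabelsManager.NO_LABEL);
    // Mixing percentage and absolute configuration in one queue hierarchy
    // fails fast with an IllegalArgumentException (see
    // validateAbsoluteVsPercentageCapacityConfig above).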

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
index 3a17d1b..a93d74e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
@@ -37,21 +37,20 @@ import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.QueueState;
 import org.apache.hadoop.yarn.api.records.Resource;
-import org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException;
 import org.apache.hadoop.yarn.security.PrivilegedEntity;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractUsersManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerQueue;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedContainerChangeRequest;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.CapacityConfigType;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.ResourceCommitRequest;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.PlacementSet;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SimplePlacementSet;
 
 /**
  * <code>CSQueue</code> represents a node in the tree of 
@@ -357,4 +356,41 @@ public interface CSQueue extends SchedulerQueue<CSQueue> {
    * @return map of usernames and corresponding weight
    */
   Map<String, Float> getUserWeights();
+
+  /**
+   * Get QueueResourceQuotas associated with each queue.
+   * @return QueueResourceQuotas
+   */
+  public QueueResourceQuotas getQueueResourceQuotas();
+
+  /**
+   * Get the CapacityConfigType: PERCENTAGE or ABSOLUTE_RESOURCE.
+   * @return CapacityConfigType
+   */
+  public CapacityConfigType getCapacityConfigType();
+
+  /**
+   * Get the effective capacity of the queue. If an absolute min/max
+   * resource is configured, it takes precedence over the percentage-based
+   * capacity. The result is also rounded down (normalized).
+   *
+   * @param label
+   *          partition
+   * @return effective queue capacity
+   */
+  Resource getEffectiveCapacity(String label);
+  Resource getEffectiveCapacityUp(String label);
+  Resource getEffectiveCapacityDown(String label, Resource factor);
+
+  /**
+   * Get the effective maximum capacity of the queue. If an absolute min/max
+   * resource is configured, it takes precedence over the percentage-based
+   * maximum capacity. The result is also rounded down (normalized).
+   *
+   * @param label
+   *          partition
+   * @return effective max queue capacity
+   */
+  Resource getEffectiveMaxCapacity(String label);
+  Resource getEffectiveMaxCapacityDown(String label, Resource factor);
 }
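
To make the precedence concrete, here is a minimal standalone sketch (illustrative only, not part of the patch) of how an effective capacity can be derived: an absolute minimum, when configured, wins over the percentage-derived guarantee.

    // Illustrative only: an absolute minimum, when configured, takes
    // precedence over the percentage-derived guarantee.
    public class EffectiveCapacitySketch {
      static long effectiveMemoryMb(long absoluteMinMb, float absoluteCapacity,
          long partitionMemoryMb) {
        if (absoluteMinMb > 0) {
          return absoluteMinMb;               // absolute configuration wins
        }
        return (long) (partitionMemoryMb * absoluteCapacity);
      }

      public static void main(String[] args) {
        // 100 GB partition, queue at 30%, no absolute override -> 30720
        System.out.println(effectiveMemoryMb(0, 0.3f, 102400));
        // Same queue with a 40 GB absolute minimum configured -> 40960
        System.out.println(effectiveMemoryMb(40960, 0.3f, 102400));
      }
    }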

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
index e1014c1..81dec80 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
@@ -150,7 +150,7 @@ class CSQueueUtils {
       }
     }
   }
-  
+
   // Set absolute capacities for {capacity, maximum-capacity}
   private static void updateAbsoluteCapacitiesByNodeLabels(
       QueueCapacities queueCapacities, QueueCapacities parentQueueCapacities) {
@@ -180,8 +180,8 @@ class CSQueueUtils {
    * used resource for all partitions of this queue.
    */
   public static void updateUsedCapacity(final ResourceCalculator rc,
-      final Resource totalPartitionResource, String nodePartition,
-      AbstractCSQueue childQueue) {
+      final Resource totalPartitionResource, Resource clusterResource,
+      String nodePartition, AbstractCSQueue childQueue) {
     QueueCapacities queueCapacities = childQueue.getQueueCapacities();
     CSQueueMetrics queueMetrics = childQueue.getMetrics();
     ResourceUsage queueResourceUsage = childQueue.getQueueResourceUsage();
@@ -193,11 +193,8 @@ class CSQueueUtils {
 
     if (Resources.greaterThan(rc, totalPartitionResource,
         totalPartitionResource, Resources.none())) {
-      // queueGuaranteed = totalPartitionedResource *
-      // absolute_capacity(partition)
-      Resource queueGuranteedResource =
-          Resources.multiply(totalPartitionResource,
-              queueCapacities.getAbsoluteCapacity(nodePartition));
+      Resource queueGuranteedResource = childQueue
+          .getEffectiveCapacity(nodePartition);
 
       // make queueGuranteed >= minimum_allocation to avoid divided by 0.
       queueGuranteedResource =
@@ -248,9 +245,7 @@ class CSQueueUtils {
     for (String partition : nodeLabels) {
       // Calculate guaranteed resource for a label in a queue by below logic.
       // (total label resource) * (absolute capacity of label in that queue)
-      Resource queueGuranteedResource = Resources.multiply(nlm
-          .getResourceByLabel(partition, cluster), queue.getQueueCapacities()
-          .getAbsoluteCapacity(partition));
+      Resource queueGuranteedResource = queue.getEffectiveCapacity(partition);
 
       // Available resource in queue for a specific label will be calculated as
       // {(guaranteed resource for a label in a queue) -
@@ -289,15 +284,14 @@ class CSQueueUtils {
     ResourceUsage queueResourceUsage = childQueue.getQueueResourceUsage();
 
     if (nodePartition == null) {
-      for (String partition : Sets.union(
-          queueCapacities.getNodePartitionsSet(),
+      for (String partition : Sets.union(queueCapacities.getNodePartitionsSet(),
           queueResourceUsage.getNodePartitionsSet())) {
         updateUsedCapacity(rc, nlm.getResourceByLabel(partition, cluster),
-            partition, childQueue);
+            cluster, partition, childQueue);
       }
     } else {
       updateUsedCapacity(rc, nlm.getResourceByLabel(nodePartition, cluster),
-          nodePartition, childQueue);
+          cluster, nodePartition, childQueue);
     }
 
     // Update queue metrics w.r.t node labels. In a generic way, we can
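
For context, a toy sketch (not Hadoop code) of the used-capacity computation this hunk changes: used capacity is the queue's used resource divided by its effective guarantee, with the guarantee floored at the minimum allocation so the division is never by zero.

    // Not Hadoop code: used capacity = used / max(guaranteed, minAllocation).
    public class UsedCapacitySketch {
      static float usedCapacity(long usedMb, long guaranteedMb, long minAllocMb) {
        long floor = Math.max(guaranteedMb, minAllocMb); // avoid divide-by-zero
        return (float) usedMb / floor;
      }

      public static void main(String[] args) {
        System.out.println(usedCapacity(8192, 16384, 1024)); // 0.5
        System.out.println(usedCapacity(8192, 0, 1024));     // 8.0, over-used
      }
    }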

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
index 13b9ff6..8cb01ab 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
@@ -47,6 +47,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.FairOrderi
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.FifoOrderingPolicy;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.OrderingPolicy;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.SchedulableEntity;
+import org.apache.hadoop.yarn.util.UnitsConversionUtil;
 import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
@@ -60,6 +61,8 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 import java.util.Set;
 import java.util.StringTokenizer;
 
@@ -316,6 +319,21 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur
   @Private
   public static final int DEFAULT_MAX_ASSIGN_PER_HEARTBEAT = -1;
 
+  /** Configuring absolute min/max resources in a queue **/
+  @Private
+  public static final String MINIMUM_RESOURCE = "min-resource";
+
+  @Private
+  public static final String MAXIMUM_RESOURCE = "max-resource";
+
+  public static final String DEFAULT_RESOURCE_TYPES = "memory,vcores";
+
+  public static final String PATTERN_FOR_ABSOLUTE_RESOURCE = "\\[([^\\]]+)";
+
+  public enum AbsoluteResourceType {
+    MEMORY, VCORES;
+  }
+
   AppPriorityACLConfigurationParser priorityACLConfig = new AppPriorityACLConfigurationParser();
 
   public CapacitySchedulerConfiguration() {
@@ -393,7 +411,7 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur
   
   public float getNonLabeledQueueCapacity(String queue) {
     float capacity = queue.equals("root") ? 100.0f : getFloat(
-        getQueuePrefix(queue) + CAPACITY, UNDEFINED);
+        getQueuePrefix(queue) + CAPACITY, 0f);
     if (capacity < MINIMUM_CAPACITY_VALUE || capacity > MAXIMUM_CAPACITY_VALUE) {
       throw new IllegalArgumentException("Illegal " +
       		"capacity of " + capacity + " for queue " + queue);
@@ -1496,4 +1514,163 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur
   public int getMaxAssignPerHeartbeat() {
     return getInt(MAX_ASSIGN_PER_HEARTBEAT, DEFAULT_MAX_ASSIGN_PER_HEARTBEAT);
   }
+
+  public static String getUnits(String resourceValue) {
+    String units;
+    for (int i = 0; i < resourceValue.length(); i++) {
+      if (Character.isAlphabetic(resourceValue.charAt(i))) {
+        units = resourceValue.substring(i);
+        if (StringUtils.isAlpha(units)) {
+          return units;
+        }
+      }
+    }
+    return "";
+  }
+
+  /**
+   * Get absolute minimum resource requirement for a queue.
+   *
+   * @param label
+   *          NodeLabel
+   * @param queue
+   *          queue path
+   * @param resourceTypes
+   *          Resource types
+   * @return ResourceInformation
+   */
+  public Resource getMinimumResourceRequirement(String label, String queue,
+      Set<String> resourceTypes) {
+    return internalGetLabeledResourceRequirementForQueue(queue, label,
+        resourceTypes, MINIMUM_RESOURCE);
+  }
+
+  /**
+   * Get absolute maximum resource requirement for a queue.
+   *
+   * @param label
+   *          NodeLabel
+   * @param queue
+   *          queue path
+   * @param resourceTypes
+   *          Resource types
+   * @return Resource
+   */
+  public Resource getMaximumResourceRequirement(String label, String queue,
+      Set<String> resourceTypes) {
+    return internalGetLabeledResourceRequirementForQueue(queue, label,
+        resourceTypes, MAXIMUM_RESOURCE);
+  }
+
+  @VisibleForTesting
+  public void setMinimumResourceRequirement(String label, String queue,
+      Resource resource) {
+    updateMinMaxResourceToConf(label, queue, resource, MINIMUM_RESOURCE);
+  }
+
+  @VisibleForTesting
+  public void setMaximumResourceRequirement(String label, String queue,
+      Resource resource) {
+    updateMinMaxResourceToConf(label, queue, resource, MAXIMUM_RESOURCE);
+  }
+
+  private void updateMinMaxResourceToConf(String label, String queue,
+      Resource resource, String type) {
+    if (queue.equals("root")) {
+      throw new IllegalArgumentException(
+          "Cannot set resource, root queue will take 100% of cluster capacity");
+    }
+
+    StringBuilder resourceString = new StringBuilder();
+    resourceString
+        .append("[" + AbsoluteResourceType.MEMORY.toString().toLowerCase() + "="
+            + resource.getMemorySize() + ","
+            + AbsoluteResourceType.VCORES.toString().toLowerCase() + "="
+            + resource.getVirtualCores() + "]");
+
+    String prefix = getQueuePrefix(queue) + type;
+    if (!label.isEmpty()) {
+      prefix = getQueuePrefix(queue) + ACCESSIBLE_NODE_LABELS + DOT + label
+          + DOT + type;
+    }
+    set(prefix, resourceString.toString());
+  }
+
+  private Resource internalGetLabeledResourceRequirementForQueue(String queue,
+      String label, Set<String> resourceTypes, String suffix) {
+    String propertyName = getNodeLabelPrefix(queue, label) + suffix;
+    String resourceString = get(propertyName);
+    if (resourceString == null || resourceString.isEmpty()) {
+      return Resources.none();
+    }
+
+    // Define resource here.
+    Resource resource = Resource.newInstance(0L, 0);
+    Matcher matcher = Pattern.compile(PATTERN_FOR_ABSOLUTE_RESOURCE)
+        .matcher(resourceString);
+    /*
+     * Absolute resource configuration for a queue is grouped by "[]".
+     * The value is a comma-separated list of key=value pairs, for example
+     * "[memory=4Gi,vcores=2]", meaning 4GB of memory and 2 vcores.
+     */
+    if (matcher.find()) {
+      // Get the sub-group.
+      String subGroup = matcher.group(1);
+      if (subGroup.trim().isEmpty()) {
+        return Resources.none();
+      }
+
+      for (String kvPair : subGroup.trim().split(",")) {
+        String[] splits = kvPair.split("=");
+
+        // Ensure that each sub string is key value pair separated by '='.
+        if (splits != null && splits.length > 1) {
+          updateResourceValuesFromConfig(resourceTypes, resource, splits);
+        }
+      }
+    }
+
+    // Memory has to be configured always.
+    if (resource.getMemorySize() == 0L) {
+      return Resources.none();
+    }
+
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("CSConf - getAbsolueResourcePerQueue: prefix="
+          + getNodeLabelPrefix(queue, label) + ", capacity=" + resource);
+    }
+    return resource;
+  }
+
+  private void updateResourceValuesFromConfig(Set<String> resourceTypes,
+      Resource resource, String[] splits) {
+
+    // If key is not a valid type, skip it.
+    if (!resourceTypes.contains(splits[0])) {
+      return;
+    }
+
+    String units = getUnits(splits[1]);
+    Long resourceValue = Long
+        .valueOf(splits[1].substring(0, splits[1].length() - units.length()));
+
+    // Convert all incoming units to MB if units is configured.
+    if (!units.isEmpty()) {
+      resourceValue = UnitsConversionUtil.convert(units, "Mi", resourceValue);
+    }
+
+    // map it based on key.
+    AbsoluteResourceType resType = AbsoluteResourceType
+        .valueOf(StringUtils.toUpperCase(splits[0].trim()));
+    switch (resType) {
+      case MEMORY :
+        resource.setMemorySize(resourceValue);
+        break;
+      case VCORES :
+        resource.setVirtualCores(resourceValue.intValue());
+        break;
+      default :
+        break;
+    }
+  }
 }
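
The bracketed syntax written by updateMinMaxResourceToConf and parsed by internalGetLabeledResourceRequirementForQueue can be exercised in isolation. The sketch below reuses the same pattern and key=value split from the diff; the surrounding class is hypothetical.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Standalone sketch of the "[key=value,...]" syntax; the pattern and the
    // comma/equals splits mirror the diff, the class itself is hypothetical.
    public class AbsoluteResourceParseSketch {
      private static final Pattern P = Pattern.compile("\\[([^\\]]+)");

      public static void main(String[] args) {
        String value = "[memory=4096,vcores=2]";
        Matcher m = P.matcher(value);
        if (m.find()) {
          for (String kvPair : m.group(1).trim().split(",")) {
            String[] splits = kvPair.split("=");
            if (splits.length > 1) {
              // Prints "memory -> 4096" and "vcores -> 2"
              System.out.println(splits[0].trim() + " -> " + splits[1].trim());
            }
          }
        }
      }
    }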

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
index 2e502b7..23d5088 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
@@ -656,12 +656,7 @@ public class LeafQueue extends AbstractCSQueue {
           1.0f / Math.max(getAbstractUsersManager().getNumActiveUsers(), 1));
       effectiveUserLimit = Math.min(effectiveUserLimit * userWeight, 1.0f);
 
-      Resource queuePartitionResource = Resources
-          .multiplyAndNormalizeUp(resourceCalculator,
-              labelManager.getResourceByLabel(nodePartition,
-                  lastClusterResource),
-              queueCapacities.getAbsoluteCapacity(nodePartition),
-              minimumAllocation);
+      Resource queuePartitionResource = getEffectiveCapacityUp(nodePartition);
 
       Resource userAMLimit = Resources.multiplyAndNormalizeUp(
           resourceCalculator, queuePartitionResource,
@@ -690,11 +685,7 @@ public class LeafQueue extends AbstractCSQueue {
        * non-labeled), * with per-partition am-resource-percent to get the max am
        * resource limit for this queue and partition.
        */
-      Resource queuePartitionResource = Resources.multiplyAndNormalizeUp(
-          resourceCalculator,
-          labelManager.getResourceByLabel(nodePartition, lastClusterResource),
-          queueCapacities.getAbsoluteCapacity(nodePartition),
-          minimumAllocation);
+      Resource queuePartitionResource = getEffectiveCapacityUp(nodePartition);
 
       Resource queueCurrentLimit = Resources.none();
       // For non-labeled partition, we need to consider the current queue
@@ -950,6 +941,14 @@ public class LeafQueue extends AbstractCSQueue {
   private void setPreemptionAllowed(ResourceLimits limits, String nodePartition) {
     // Set preemption-allowed:
     // For leaf queue, only under-utilized queue is allowed to preempt resources from other queues
+    if (!queueResourceQuotas.getEffectiveMinResource(nodePartition)
+        .equals(Resources.none())) {
+      limits.setIsAllowPreemption(Resources.lessThan(resourceCalculator,
+          csContext.getClusterResource(), queueUsage.getUsed(nodePartition),
+          queueResourceQuotas.getEffectiveMinResource(nodePartition)));
+      return;
+    }
+
     float usedCapacity = queueCapacities.getAbsoluteUsedCapacity(nodePartition);
     float guaranteedCapacity = queueCapacities.getAbsoluteCapacity(nodePartition);
     limits.setIsAllowPreemption(usedCapacity < guaranteedCapacity);
@@ -1326,7 +1325,7 @@ public class LeafQueue extends AbstractCSQueue {
     currentPartitionResourceLimit =
         partition.equals(RMNodeLabelsManager.NO_LABEL)
             ? currentPartitionResourceLimit
-            : getQueueMaxResource(partition, clusterResource);
+            : getQueueMaxResource(partition);
 
     Resource headroom = Resources.componentwiseMin(
         Resources.subtract(userLimitResource, user.getUsed(partition)),
@@ -1698,12 +1697,8 @@ public class LeafQueue extends AbstractCSQueue {
     // this. So need cap limits by queue's max capacity here.
     this.cachedResourceLimitsForHeadroom =
         new ResourceLimits(currentResourceLimits.getLimit());
-    Resource queueMaxResource =
-        Resources.multiplyAndNormalizeDown(resourceCalculator, labelManager
-            .getResourceByLabel(RMNodeLabelsManager.NO_LABEL, clusterResource),
-            queueCapacities
-                .getAbsoluteMaximumCapacity(RMNodeLabelsManager.NO_LABEL),
-            minimumAllocation);
+    Resource queueMaxResource = getEffectiveMaxCapacityDown(
+        RMNodeLabelsManager.NO_LABEL, minimumAllocation);
     this.cachedResourceLimitsForHeadroom.setLimit(Resources.min(
         resourceCalculator, clusterResource, queueMaxResource,
         currentResourceLimits.getLimit()));
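
A simplified restatement (not the actual LeafQueue code) of the preemption rule added in setPreemptionAllowed: with an absolute minimum configured, a queue may preempt only while its usage is below that effective minimum; otherwise the legacy percentage comparison applies.

    // Simplified restatement, not the LeafQueue source.
    public class PreemptionAllowedSketch {
      static boolean allowPreemption(long usedMb, long effectiveMinMb,
          float absUsedCapacity, float absCapacity) {
        if (effectiveMinMb > 0) {             // absolute config takes precedence
          return usedMb < effectiveMinMb;
        }
        return absUsedCapacity < absCapacity; // legacy percentage comparison
      }

      public static void main(String[] args) {
        System.out.println(allowPreemption(2048, 4096, 0f, 0f)); // true
        System.out.println(allowPreemption(8192, 4096, 0f, 0f)); // false
        System.out.println(allowPreemption(0, 0, 0.2f, 0.5f));   // true
      }
    }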




[31/50] [abbrv] hadoop git commit: HDFS-12182. BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks. Contributed by Wellington Chevreuil.

Posted by wa...@apache.org.
HDFS-12182. BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks. Contributed by Wellington Chevreuil.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9a3c2379
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9a3c2379
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9a3c2379

Branch: refs/heads/YARN-5881
Commit: 9a3c2379ef24cdca5153abf4b63fde1131ff8989
Parents: 07694fc
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Tue Aug 8 23:43:24 2017 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Tue Aug 8 23:44:18 2017 -0700

----------------------------------------------------------------------
 .../server/blockmanagement/BlockManager.java    | 27 ++++++++--
 .../blockmanagement/TestBlockManager.java       | 54 ++++++++++++++++++++
 .../hdfs/server/namenode/TestMetaSave.java      |  2 +
 3 files changed, 79 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a3c2379/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index fc754a0..6129db8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -705,17 +705,36 @@ public class BlockManager implements BlockStatsMXBean {
     datanodeManager.fetchDatanodes(live, dead, false);
     out.println("Live Datanodes: " + live.size());
     out.println("Dead Datanodes: " + dead.size());
+
     //
-    // Dump contents of neededReconstruction
+    // Need to iterate over all queues in neededReconstruction
+    // except for the QUEUE_WITH_CORRUPT_BLOCKS queue
     //
     synchronized (neededReconstruction) {
       out.println("Metasave: Blocks waiting for reconstruction: "
-          + neededReconstruction.size());
-      for (Block block : neededReconstruction) {
+          + neededReconstruction.getLowRedundancyBlockCount());
+      for (int i = 0; i < neededReconstruction.LEVEL; i++) {
+        if (i != neededReconstruction.QUEUE_WITH_CORRUPT_BLOCKS) {
+          for (Iterator<BlockInfo> it = neededReconstruction.iterator(i);
+               it.hasNext();) {
+            Block block = it.next();
+            dumpBlockMeta(block, out);
+          }
+        }
+      }
+      //
+      // Now print the corrupt blocks separately
+      //
+      out.println("Metasave: Blocks currently missing: " +
+          neededReconstruction.getCorruptBlockSize());
+      for (Iterator<BlockInfo> it = neededReconstruction.
+          iterator(neededReconstruction.QUEUE_WITH_CORRUPT_BLOCKS);
+           it.hasNext();) {
+        Block block = it.next();
         dumpBlockMeta(block, out);
       }
     }
-    
+
     // Dump any postponed over-replicated blocks
     out.println("Mis-replicated blocks that have been postponed:");
     for (Block block : postponedMisreplicatedBlocks) {
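
A toy sketch of the reporting split this patch introduces: walk every redundancy-priority queue except the corrupt one to count blocks waiting for reconstruction, then report the corrupt queue separately as missing. The queue count and indices below are illustrative, not the real LowRedundancyBlocks constants.

    import java.util.ArrayList;
    import java.util.List;

    // Toy sketch of the metaSave reporting split; LEVEL and the corrupt
    // queue index are illustrative values.
    public class MetaSaveSketch {
      public static void main(String[] args) {
        final int LEVEL = 5;                      // number of priority queues
        final int QUEUE_WITH_CORRUPT_BLOCKS = 4;  // corrupt blocks live here
        List<List<String>> queues = new ArrayList<>();
        for (int i = 0; i < LEVEL; i++) {
          queues.add(new ArrayList<>());
        }
        queues.get(1).add("blk_1001");                         // under replicated
        queues.get(QUEUE_WITH_CORRUPT_BLOCKS).add("blk_2002"); // missing

        int waiting = 0;
        for (int i = 0; i < LEVEL; i++) {
          if (i != QUEUE_WITH_CORRUPT_BLOCKS) {
            waiting += queues.get(i).size();
          }
        }
        System.out.println("Metasave: Blocks waiting for reconstruction: " + waiting);
        System.out.println("Metasave: Blocks currently missing: "
            + queues.get(QUEUE_WITH_CORRUPT_BLOCKS).size());
      }
    }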

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a3c2379/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 6b1a979..42aeadf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -1459,4 +1459,58 @@ public class TestBlockManager {
     }
   }
 
+  @Test
+  public void testMetaSaveMissingReplicas() throws Exception {
+    List<DatanodeStorageInfo> origStorages = getStorages(0, 1);
+    List<DatanodeDescriptor> origNodes = getNodes(origStorages);
+    BlockInfo block = makeBlockReplicasMissing(0, origNodes);
+    File file = new File("test.log");
+    PrintWriter out = new PrintWriter(file);
+    bm.metaSave(out);
+    out.flush();
+    FileInputStream fstream = new FileInputStream(file);
+    DataInputStream in = new DataInputStream(fstream);
+    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
+    StringBuffer buffer = new StringBuffer();
+    String line;
+    try {
+      while ((line = reader.readLine()) != null) {
+        buffer.append(line);
+      }
+      String output = buffer.toString();
+      assertTrue("Metasave output should have reported missing blocks.",
+          output.contains("Metasave: Blocks currently missing: 1"));
+      assertTrue("There should be 0 blocks waiting for reconstruction",
+          output.contains("Metasave: Blocks waiting for reconstruction: 0"));
+      String blockNameGS = block.getBlockName() + "_" +
+          block.getGenerationStamp();
+      assertTrue("Block " + blockNameGS + " should be MISSING.",
+          output.contains(blockNameGS + " MISSING"));
+    } finally {
+      reader.close();
+      file.delete();
+    }
+  }
+
+  private BlockInfo makeBlockReplicasMissing(long blockId,
+      List<DatanodeDescriptor> nodesList) throws IOException {
+    long inodeId = ++mockINodeId;
+    final INodeFile bc = TestINodeFile.createINodeFile(inodeId);
+
+    BlockInfo blockInfo = blockOnNodes(blockId, nodesList);
+    blockInfo.setReplication((short) 3);
+    blockInfo.setBlockCollectionId(inodeId);
+
+    Mockito.doReturn(bc).when(fsn).getBlockCollection(inodeId);
+    bm.blocksMap.addBlockCollection(blockInfo, bc);
+    bm.markBlockReplicasAsCorrupt(blockInfo, blockInfo,
+        blockInfo.getGenerationStamp() + 1,
+        blockInfo.getNumBytes(),
+        new DatanodeStorageInfo[]{});
+    BlockCollection mockedBc = Mockito.mock(BlockCollection.class);
+    Mockito.when(mockedBc.getBlocks()).thenReturn(new BlockInfo[]{blockInfo});
+    bm.checkRedundancy(mockedBc);
+    return blockInfo;
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a3c2379/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
index 0303a5d..8cc1433 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
@@ -155,6 +155,8 @@ public class TestMetaSave {
       line = reader.readLine();
       assertTrue(line.equals("Metasave: Blocks waiting for reconstruction: 0"));
       line = reader.readLine();
+      assertTrue(line.equals("Metasave: Blocks currently missing: 0"));
+      line = reader.readLine();
       assertTrue(line.equals("Mis-replicated blocks that have been postponed:"));
       line = reader.readLine();
       assertTrue(line.equals("Metasave: Blocks being reconstructed: 0"));




[19/50] [abbrv] hadoop git commit: YARN-6757. Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path (Contributed by Miklos Szegedi via Daniel Templeton)

Posted by wa...@apache.org.
YARN-6757. Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path
(Contributed by Miklos Szegedi via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/47b145b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/47b145b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/47b145b9

Branch: refs/heads/YARN-5881
Commit: 47b145b9b4e81d781891abce8a6638f0b436acc4
Parents: 9891295
Author: Daniel Templeton <te...@apache.org>
Authored: Tue Aug 8 10:33:26 2017 -0700
Committer: Daniel Templeton <te...@apache.org>
Committed: Tue Aug 8 10:33:26 2017 -0700

----------------------------------------------------------------------
 .../src/main/resources/yarn-default.xml         | 43 ++++++++++-----
 .../linux/resources/CGroupsHandler.java         | 15 +++++
 .../linux/resources/CGroupsHandlerImpl.java     | 26 +++++----
 .../linux/resources/ResourceHandlerModule.java  | 58 ++++++++++++++++++--
 .../util/CgroupsLCEResourcesHandler.java        | 53 ++++++++++++------
 .../linux/resources/TestCGroupsHandlerImpl.java | 27 ++++++++-
 .../util/TestCgroupsLCEResourcesHandler.java    | 31 +++++++++++
 .../src/site/markdown/GracefulDecommission.md   | 12 ++--
 .../src/site/markdown/NodeManagerCgroups.md     | 17 +++++-
 .../site/markdown/WritingYarnApplications.md    |  4 +-
 .../src/site/markdown/registry/yarn-registry.md | 14 ++---
 11 files changed, 237 insertions(+), 63 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 95b8a88..000e892 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -134,7 +134,7 @@
 
   <property>
       <description>
-        This configures the HTTP endpoint for Yarn Daemons.The following
+        This configures the HTTP endpoint for YARN daemons. The following
         values are supported:
         - HTTP_ONLY : Service is provided only on http
         - HTTPS_ONLY : Service is provided only on https
@@ -1063,14 +1063,14 @@
       DeletionService will delete the application's localized file directory
       and log directory.
       
-      To diagnose Yarn application problems, set this property's value large
+      To diagnose YARN application problems, set this property's value large
       enough (for example, to 600 = 10 minutes) to permit examination of these
       directories. After changing the property's value, you must restart the 
       nodemanager in order for it to have an effect.
 
-      The roots of Yarn applications' work directories is configurable with
+      The roots of YARN applications' work directories is configurable with
       the yarn.nodemanager.local-dirs property (see below), and the roots
-      of the Yarn applications' log directories is configurable with the 
+      of the YARN applications' log directories is configurable with the
       yarn.nodemanager.log-dirs property (see also below).
     </description>
     <name>yarn.nodemanager.delete.debug-delay-sec</name>
@@ -1510,28 +1510,45 @@
   <property>
     <description>The cgroups hierarchy under which to place YARN processes (cannot contain commas).
     If yarn.nodemanager.linux-container-executor.cgroups.mount is false
-    (that is, if cgroups have been pre-configured) and the Yarn user has write
+    (that is, if cgroups have been pre-configured) and the YARN user has write
     access to the parent directory, then the directory will be created.
-    If the directory already exists, the administrator has to give Yarn
+    If the directory already exists, the administrator has to give YARN
     write permissions to it recursively.
-    Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.</description>
+    This property only applies when the LCE resources handler is set to
+    CgroupsLCEResourcesHandler.</description>
     <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
     <value>/hadoop-yarn</value>
   </property>
 
   <property>
     <description>Whether the LCE should attempt to mount cgroups if not found.
-    Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.</description>
+    This property only applies when the LCE resources handler is set to
+    CgroupsLCEResourcesHandler.
+    </description>
     <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
     <value>false</value>
   </property>
 
   <property>
-    <description>Where the LCE should attempt to mount cgroups if not found. Common locations
-    include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux
-    distribution in use. This path must exist before the NodeManager is launched.
-    Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and
-    yarn.nodemanager.linux-container-executor.cgroups.mount is true.</description>
+    <description>This property sets the path from which YARN will read the
+    CGroups configuration. YARN has built-in functionality to discover the
+    system CGroup mount paths, so use this property only if YARN's automatic
+    mount path discovery does not work.
+
+    The path specified by this property must exist before the NodeManager is
+    launched.
+    If yarn.nodemanager.linux-container-executor.cgroups.mount is set to true,
+    YARN will first try to mount the CGroups at the specified path before
+    reading them.
+    If yarn.nodemanager.linux-container-executor.cgroups.mount is set to
+    false, YARN will read the CGroups at the specified path.
+    If this property is empty, YARN tries to detect the CGroups location.
+
+    Please refer to NodeManagerCgroups.html in the documentation for further
+    details.
+    This property only applies when the LCE resources handler is set to
+    CgroupsLCEResourcesHandler.
+    </description>
     <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
   </property>
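
The three configuration combinations described above can be restated with a small sketch. The property names are from the diff; the decision logic here is a simplification, not NodeManager code.

    // Property names are from the diff; the decision logic is a simplified
    // restatement, not NodeManager code.
    public class CgroupMountPathSketch {
      static String behaviour(boolean mount, String mountPath) {
        if (mountPath == null || mountPath.isEmpty()) {
          return "auto-detect the CGroups location";
        }
        return mount
            ? "mount cgroups at " + mountPath + " first, then read them"
            : "read pre-configured cgroups from " + mountPath;
      }

      public static void main(String[] args) {
        // yarn.nodemanager.linux-container-executor.cgroups.mount + mount-path
        System.out.println(behaviour(false, "/sys/fs/cgroup"));
        System.out.println(behaviour(true, "/sys/fs/cgroup"));
        System.out.println(behaviour(false, ""));
      }
    }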
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java
index 8fc35a8..82bd366 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java
@@ -23,6 +23,9 @@ package org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resourc
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
+import java.util.HashSet;
+import java.util.Set;
+
 /**
  * Provides CGroups functionality. Implementations are expected to be
  * thread-safe
@@ -54,6 +57,18 @@ public interface CGroupsHandler {
     String getName() {
       return name;
     }
+
+    /**
+     * Get the set of valid cgroup controller names.
+     * @return The set of cgroup name strings
+     */
+    public static Set<String> getValidCGroups() {
+      HashSet<String> validCgroups = new HashSet<>();
+      for (CGroupController controller : CGroupController.values()) {
+        validCgroups.add(controller.getName());
+      }
+      return validCgroups;
+    }
   }
 
   String CGROUP_FILE_TASKS = "tasks";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
index 85b01cd..9fd20eb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
@@ -83,7 +83,7 @@ class CGroupsHandlerImpl implements CGroupsHandler {
    * @param mtab mount file location
    * @throws ResourceHandlerException if initialization failed
    */
-  public CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor
+  CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor
       privilegedOperationExecutor, String mtab)
       throws ResourceHandlerException {
     this.cGroupPrefix = conf.get(YarnConfiguration.
@@ -115,7 +115,7 @@ class CGroupsHandlerImpl implements CGroupsHandler {
    *                                    PrivilegedContainerOperations
    * @throws ResourceHandlerException if initialization failed
    */
-  public CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor
+  CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor
       privilegedOperationExecutor) throws ResourceHandlerException {
     this(conf, privilegedOperationExecutor, MTAB_FILE);
   }
@@ -142,11 +142,18 @@ class CGroupsHandlerImpl implements CGroupsHandler {
     // the same hierarchy will be mounted at each mount point with the same
     // subsystem set.
 
-    Map<String, Set<String>> newMtab;
+    Map<String, Set<String>> newMtab = null;
     Map<CGroupController, String> cPaths;
     try {
-      // parse mtab
-      newMtab = parseMtab(mtabFile);
+      if (this.cGroupMountPath != null && !this.enableCGroupMount) {
+        newMtab = ResourceHandlerModule.
+            parseConfiguredCGroupPath(this.cGroupMountPath);
+      }
+
+      if (newMtab == null) {
+        // parse mtab
+        newMtab = parseMtab(mtabFile);
+      }
 
       // find cgroup controller paths
       cPaths = initializeControllerPathsFromMtab(newMtab);
@@ -203,10 +210,8 @@ class CGroupsHandlerImpl implements CGroupsHandler {
       throws IOException {
     Map<String, Set<String>> ret = new HashMap<>();
     BufferedReader in = null;
-    HashSet<String> validCgroups = new HashSet<>();
-    for (CGroupController controller : CGroupController.values()) {
-      validCgroups.add(controller.getName());
-    }
+    Set<String> validCgroups =
+        CGroupsHandler.CGroupController.getValidCGroups();
 
     try {
       FileInputStream fis = new FileInputStream(new File(mtab));
@@ -487,7 +492,8 @@ class CGroupsHandlerImpl implements CGroupsHandler {
       try (BufferedReader inl =
           new BufferedReader(new InputStreamReader(new FileInputStream(cgf
               + "/tasks"), "UTF-8"))) {
-        if ((str = inl.readLine()) != null) {
+        str = inl.readLine();
+        if (str != null) {
           LOG.debug("First line in cgroup tasks file: " + cgf + " " + str);
         }
       } catch (IOException e) {
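
The mount-path-first fallback added in initializeCGroupController above (scan the configured path when mounting is disabled, otherwise parse mtab) can be summarized with a small sketch; scanConfiguredPath and parseMtabFile below are hypothetical stand-ins for the real helpers.

    import java.util.Collections;
    import java.util.Map;
    import java.util.Set;

    // Simplified sketch of the fallback order; the helper methods are
    // hypothetical stand-ins, not the CGroupsHandlerImpl internals.
    public class ControllerPathFallbackSketch {
      static Map<String, Set<String>> resolve(String mountPath,
          boolean enableMount) {
        Map<String, Set<String>> mtab = null;
        if (mountPath != null && !enableMount) {
          mtab = scanConfiguredPath(mountPath);  // hypothetical helper
        }
        if (mtab == null) {
          mtab = parseMtabFile();                // hypothetical helper
        }
        return mtab;
      }

      static Map<String, Set<String>> scanConfiguredPath(String p) {
        return Collections.singletonMap(p + "/cpu,cpuacct",
            Collections.singleton("cpu"));
      }

      static Map<String, Set<String>> parseMtabFile() {
        return Collections.emptyMap();
      }

      public static void main(String[] args) {
        System.out.println(resolve("/sys/fs/cgroup", false));
        System.out.println(resolve(null, true));
      }
    }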

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java
index 7fc04bd..4d137f0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java
@@ -31,6 +31,13 @@ import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileg
 import org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler;
 import org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler;
 
+import java.io.File;
+import java.io.IOException;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Arrays;
 import java.util.ArrayList;
 import java.util.List;
 
@@ -113,8 +120,8 @@ public class ResourceHandlerModule {
   }
 
   private static TrafficControlBandwidthHandlerImpl
-    getTrafficControlBandwidthHandler(Configuration conf)
-      throws ResourceHandlerException {
+      getTrafficControlBandwidthHandler(Configuration conf)
+        throws ResourceHandlerException {
     if (conf.getBoolean(YarnConfiguration.NM_NETWORK_RESOURCE_ENABLED,
         YarnConfiguration.DEFAULT_NM_NETWORK_RESOURCE_ENABLED)) {
       if (trafficControlBandwidthHandler == null) {
@@ -137,8 +144,8 @@ public class ResourceHandlerModule {
   }
 
   public static OutboundBandwidthResourceHandler
-    getOutboundBandwidthResourceHandler(Configuration conf)
-      throws ResourceHandlerException {
+      getOutboundBandwidthResourceHandler(Configuration conf)
+        throws ResourceHandlerException {
     return getTrafficControlBandwidthHandler(conf);
   }
 
@@ -176,7 +183,7 @@ public class ResourceHandlerModule {
   }
 
   private static CGroupsMemoryResourceHandlerImpl
-    getCgroupsMemoryResourceHandler(
+      getCgroupsMemoryResourceHandler(
       Configuration conf) throws ResourceHandlerException {
     if (cGroupsMemoryResourceHandler == null) {
       synchronized (MemoryResourceHandler.class) {
@@ -229,4 +236,45 @@ public class ResourceHandlerModule {
   static void nullifyResourceHandlerChain() throws ResourceHandlerException {
     resourceHandlerChain = null;
   }
+
+  /**
+   * If a cgroup mount directory is specified, it returns cgroup directories
+   * with valid names.
+   * The requirement is that each hierarchy has to be named with the comma
+   * separated names of subsystems supported.
+   * For example: /sys/fs/cgroup/cpu,cpuacct
+   * @param cgroupMountPath Root cgroup mount path (/sys/fs/cgroup in the
+   *                        example above)
+   * @return A path to cgroup subsystem set mapping in the same format as
+   *         {@link CGroupsHandlerImpl#parseMtab(String)}
+   * @throws IOException if the specified directory cannot be listed
+   */
+  public static Map<String, Set<String>> parseConfiguredCGroupPath(
+      String cgroupMountPath) throws IOException {
+    File cgroupDir = new File(cgroupMountPath);
+    File[] list = cgroupDir.listFiles();
+    if (list == null) {
+      throw new IOException("Empty cgroup mount directory specified: " +
+          cgroupMountPath);
+    }
+
+    Map<String, Set<String>> pathSubsystemMappings = new HashMap<>();
+    Set<String> validCGroups =
+        CGroupsHandler.CGroupController.getValidCGroups();
+    for (File candidate: list) {
+      Set<String> cgroupList =
+          new HashSet<>(Arrays.asList(candidate.getName().split(",")));
+      // Collect the valid subsystem names
+      cgroupList.retainAll(validCGroups);
+      if (!cgroupList.isEmpty()) {
+        if (candidate.isDirectory() && candidate.canWrite()) {
+          pathSubsystemMappings.put(candidate.getAbsolutePath(), cgroupList);
+        } else {
+          LOG.warn("The following cgroup is not a directory or it is not"
+              + " writable" + candidate.getAbsolutePath());
+        }
+      }
+    }
+    return pathSubsystemMappings;
+  }
 }
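
A self-contained approximation of the directory scan parseConfiguredCGroupPath performs: each child of the cgroup mount point is named by a comma-separated controller list (for example "cpu,cpuacct"), and only known controller names are kept. The controller set below is abbreviated and the class is illustrative.

    import java.io.File;
    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Approximation of the scan; the controller set is abbreviated and the
    // class is illustrative, not the ResourceHandlerModule implementation.
    public class CGroupPathScanSketch {
      public static void main(String[] args) throws IOException {
        Set<String> valid = new HashSet<>(Arrays.asList(
            "cpu", "cpuacct", "memory", "blkio", "devices", "net_cls"));
        File root = new File("/sys/fs/cgroup");
        File[] children = root.listFiles();
        if (children == null) {
          throw new IOException("Empty cgroup mount directory specified: " + root);
        }
        Map<String, Set<String>> mapping = new HashMap<>();
        for (File candidate : children) {
          Set<String> controllers =
              new HashSet<>(Arrays.asList(candidate.getName().split(",")));
          controllers.retainAll(valid);  // keep only known subsystems
          if (!controllers.isEmpty() && candidate.isDirectory()
              && candidate.canWrite()) {
            mapping.put(candidate.getAbsolutePath(), controllers);
          }
        }
        System.out.println(mapping);
      }
    }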

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
index bca4fdc..7a89285 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
@@ -27,6 +27,7 @@ import java.io.InputStreamReader;
 import java.io.OutputStreamWriter;
 import java.io.PrintWriter;
 import java.io.Writer;
+import java.util.Arrays;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
@@ -39,7 +40,6 @@ import java.util.regex.Pattern;
 
 import com.google.common.annotations.VisibleForTesting;
 
-import com.google.common.collect.Sets;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -51,6 +51,8 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperation;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsCpuResourceHandlerImpl;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandler;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerModule;
 import org.apache.hadoop.yarn.util.Clock;
 import org.apache.hadoop.yarn.util.ResourceCalculatorPlugin;
 import org.apache.hadoop.yarn.util.SystemClock;
@@ -87,11 +89,11 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
 
   private long deleteCgroupTimeout;
   private long deleteCgroupDelay;
-  // package private for testing purposes
+  @VisibleForTesting
   Clock clock;
 
   private float yarnProcessors;
-  int nodeVCores;
+  private int nodeVCores;
 
   public CgroupsLCEResourcesHandler() {
     this.controllerPaths = new HashMap<String, String>();
@@ -132,8 +134,10 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
     this.strictResourceUsageMode =
         conf
           .getBoolean(
-            YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE,
-            YarnConfiguration.DEFAULT_NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE);
+            YarnConfiguration
+                .NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE,
+            YarnConfiguration
+                .DEFAULT_NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE);
 
     int len = cgroupPrefix.length();
     if (cgroupPrefix.charAt(len - 1) == '/') {
@@ -169,8 +173,10 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
     if (systemProcessors != (int) yarnProcessors) {
       LOG.info("YARN containers restricted to " + yarnProcessors + " cores");
       int[] limits = getOverallLimits(yarnProcessors);
-      updateCgroup(CONTROLLER_CPU, "", CPU_PERIOD_US, String.valueOf(limits[0]));
-      updateCgroup(CONTROLLER_CPU, "", CPU_QUOTA_US, String.valueOf(limits[1]));
+      updateCgroup(CONTROLLER_CPU, "", CPU_PERIOD_US,
+          String.valueOf(limits[0]));
+      updateCgroup(CONTROLLER_CPU, "", CPU_QUOTA_US,
+          String.valueOf(limits[1]));
     } else if (CGroupsCpuResourceHandlerImpl.cpuLimitsExist(
         pathForCgroup(CONTROLLER_CPU, ""))) {
       LOG.info("Removing CPU constraints for YARN containers.");
@@ -178,8 +184,8 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
     }
   }
 
-  int[] getOverallLimits(float yarnProcessors) {
-    return CGroupsCpuResourceHandlerImpl.getOverallLimits(yarnProcessors);
+  int[] getOverallLimits(float yarnProcessorsArg) {
+    return CGroupsCpuResourceHandlerImpl.getOverallLimits(yarnProcessorsArg);
   }
 
 
@@ -204,7 +210,7 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
       LOG.debug("createCgroup: " + path);
     }
 
-    if (! new File(path).mkdir()) {
+    if (!new File(path).mkdir()) {
       throw new IOException("Failed to create cgroup at " + path);
     }
   }
@@ -251,7 +257,8 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
       try (BufferedReader inl =
             new BufferedReader(new InputStreamReader(new FileInputStream(cgf
               + "/tasks"), "UTF-8"))) {
-        if ((str = inl.readLine()) != null) {
+        str = inl.readLine();
+        if (str != null) {
           LOG.debug("First line in cgroup tasks file: " + cgf + " " + str);
         }
       } catch (IOException e) {
@@ -337,9 +344,9 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
               (containerVCores * yarnProcessors) / (float) nodeVCores;
           int[] limits = getOverallLimits(containerCPU);
           updateCgroup(CONTROLLER_CPU, containerName, CPU_PERIOD_US,
-            String.valueOf(limits[0]));
+              String.valueOf(limits[0]));
           updateCgroup(CONTROLLER_CPU, containerName, CPU_QUOTA_US,
-            String.valueOf(limits[1]));
+              String.valueOf(limits[1]));
         }
       }
     }
@@ -400,6 +407,8 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
   private Map<String, Set<String>> parseMtab() throws IOException {
     Map<String, Set<String>> ret = new HashMap<String, Set<String>>();
     BufferedReader in = null;
+    Set<String> validCgroups =
+        CGroupsHandler.CGroupController.getValidCGroups();
 
     try {
       FileInputStream fis = new FileInputStream(new File(getMtabFileName()));
@@ -415,8 +424,11 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
           String options = m.group(3);
 
           if (type.equals(CGROUPS_FSTYPE)) {
-            HashSet<String> value = Sets.newHashSet(options.split(","));
-            ret.put(path, value);
+            Set<String> cgroupList =
+                new HashSet<>(Arrays.asList(options.split(",")));
+            // Collect the valid subsystem names
+            cgroupList.retainAll(validCgroups);
+            ret.put(path, cgroupList);
           }
         }
       }
@@ -448,7 +460,16 @@ public class CgroupsLCEResourcesHandler implements LCEResourcesHandler {
 
   private void initializeControllerPaths() throws IOException {
     String controllerPath;
-    Map<String, Set<String>> parsedMtab = parseMtab();
+    Map<String, Set<String>> parsedMtab = null;
+
+    if (this.cgroupMountPath != null && !this.cgroupMount) {
+      parsedMtab = ResourceHandlerModule.
+          parseConfiguredCGroupPath(this.cgroupMountPath);
+    }
+
+    if (parsedMtab == null) {
+      parsedMtab = parseMtab();
+    }
 
     // CPU
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
index 7a4d39f..ab989cf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
@@ -573,4 +573,29 @@ public class TestCGroupsHandlerImpl {
         new File(new File(newMountPoint, "cpu"), this.hierarchy);
     assertTrue("Yarn cgroup should exist", hierarchyFile.exists());
   }
-}
+
+
+  @Test
+  public void testManualCgroupSetting() throws ResourceHandlerException {
+    YarnConfiguration conf = new YarnConfiguration();
+    conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT_PATH, tmpPath);
+    conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_HIERARCHY,
+        "/hadoop-yarn");
+    File cpu = new File(new File(tmpPath, "cpuacct,cpu"), "/hadoop-yarn");
+
+    try {
+      Assert.assertTrue("temp dir should be created", cpu.mkdirs());
+
+      CGroupsHandlerImpl cGroupsHandler = new CGroupsHandlerImpl(conf, null);
+      cGroupsHandler.initializeCGroupController(
+              CGroupsHandler.CGroupController.CPU);
+
+      Assert.assertEquals("CPU CGRoup path was not set", cpu.getAbsolutePath(),
+              new File(cGroupsHandler.getPathForCGroup(
+                  CGroupsHandler.CGroupController.CPU, "")).getAbsolutePath());
+
+    } finally {
+      FileUtils.deleteQuietly(cpu);
+    }
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java
index 1ed8fd8..7d8704f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java
@@ -41,6 +41,8 @@ import java.util.Scanner;
 import java.util.Set;
 import java.util.concurrent.CountDownLatch;
 
+import static org.mockito.Mockito.when;
+
 @Deprecated
 public class TestCgroupsLCEResourcesHandler {
   private static File cgroupDir = null;
@@ -388,4 +390,32 @@ public class TestCgroupsLCEResourcesHandler {
       FileUtils.deleteQuietly(memory);
     }
   }
+
+  @Test
+  public void testManualCgroupSetting() throws IOException {
+    CgroupsLCEResourcesHandler handler = new CgroupsLCEResourcesHandler();
+    YarnConfiguration conf = new YarnConfiguration();
+    conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT_PATH,
+        cgroupDir.getAbsolutePath());
+    handler.setConf(conf);
+    File cpu = new File(new File(cgroupDir, "cpuacct,cpu"), "/hadoop-yarn");
+
+    try {
+      Assert.assertTrue("temp dir should be created", cpu.mkdirs());
+
+      final int numProcessors = 4;
+      ResourceCalculatorPlugin plugin =
+              Mockito.mock(ResourceCalculatorPlugin.class);
+      when(plugin.getNumProcessors()).thenReturn(numProcessors);
+      when(plugin.getNumCores()).thenReturn(numProcessors);
+      handler.init(null, plugin);
+
+      Assert.assertEquals("CPU CGRoup path was not set", cpu.getParent(),
+          handler.getControllerPaths().get("cpu"));
+
+    } finally {
+      FileUtils.deleteQuietly(cpu);
+    }
+  }
+
 }
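
For reference, a minimal sketch of the NodeManager configuration these two tests exercise. The YarnConfiguration constants are the real ones used in the tests above; the mount point value is only an assumed example.

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class CGroupsMountPathExample {
      public static void main(String[] args) {
        YarnConfiguration conf = new YarnConfiguration();
        // Use CGroups that are already mounted; YARN will not mount them itself.
        conf.setBoolean(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT,
            false);
        // Assumed mount point; on many distributions this is /sys/fs/cgroup.
        conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT_PATH,
            "/sys/fs/cgroup");
        // Hierarchy under each controller reserved for YARN containers.
        conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_HIERARCHY,
            "/hadoop-yarn");
        // With this setup the handlers resolve controller paths such as
        // /sys/fs/cgroup/cpu,cpuacct/hadoop-yarn, as asserted in the tests.
      }
    }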

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
index 2acb3d2..2e83ca2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
@@ -13,7 +13,7 @@
 -->
 
 
-Graceful Decommission of Yarn Nodes
+Graceful Decommission of YARN Nodes
 ===============
 
 * [Overview](#overview)
@@ -29,19 +29,19 @@ Graceful Decommission of Yarn Nodes
 Overview
 --------
 
-Yarn is scalable very easily: any new NodeManager could join to the configured ResourceManager and start to execute jobs. But to achieve full elasticity we need a decommissioning process which helps to remove existing nodes and down-scale the cluster.
+YARN scales easily: any new NodeManager can join the configured ResourceManager and start executing jobs. But to achieve full elasticity we also need a decommissioning process that helps remove existing nodes and down-scale the cluster.
 
-Yarn Nodes could be decommissioned NORMAL or GRACEFUL.
+YARN nodes can be decommissioned in one of two ways: NORMAL or GRACEFUL.
 
-Normal Decommission of Yarn Nodes means an immediate shutdown.
+Normal Decommission of YARN Nodes means an immediate shutdown.
 
-Graceful Decommission of Yarn Nodes is the mechanism to decommission NMs while minimize the impact to running applications. Once a node is in DECOMMISSIONING state, RM won't schedule new containers on it and will wait for running containers and applications to complete (or until decommissioning timeout exceeded) before transition the node into DECOMMISSIONED.
+Graceful Decommission of YARN Nodes is the mechanism to decommission NMs while minimizing the impact on running applications. Once a node is in the DECOMMISSIONING state, the RM won't schedule new containers on it and will wait for running containers and applications to complete (or for the decommissioning timeout to be exceeded) before transitioning the node into DECOMMISSIONED.
 
 ## Quick start
 
 To do a normal decommissioning:
 
-1. Start a Yarn cluster (with NodeManageres and ResourceManager)
+1. Start a YARN cluster (with NodeManagers and a ResourceManager)
 2. Start a yarn job (for example with `yarn jar...` )
 3. Add `yarn.resourcemanager.nodes.exclude-path` property to your `yarn-site.xml` (Note: you don't need to restart the ResourceManager)
 4. Create a text file (the location is defined in the previous step) with one line which contains the name of a selected NodeManager 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
index 2704f10..d362801 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
@@ -17,7 +17,7 @@ Using CGroups with YARN
 
 <!-- MACRO{toc|fromDepth=0|toDepth=3} -->
 
-CGroups is a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. CGroups is a Linux kernel feature and was merged into kernel version 2.6.24. From a YARN perspective, this allows containers to be limited in their resource usage. A good example of this is CPU usage. Without CGroups, it becomes hard to limit container CPU usage. Currently, CGroups is only used for limiting CPU usage.
+CGroups is a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. CGroups is a Linux kernel feature and was merged into kernel version 2.6.24. From a YARN perspective, this allows containers to be limited in their resource usage. A good example of this is CPU usage. Without CGroups, it becomes hard to limit container CPU usage.
 
 CGroups Configuration
 ---------------------
@@ -30,9 +30,9 @@ The following settings are related to setting up CGroups. These need to be set i
 |:---- |:---- |
 | `yarn.nodemanager.container-executor.class` | This should be set to "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor". CGroups is a Linux kernel feature and is exposed via the LinuxContainerExecutor. |
 | `yarn.nodemanager.linux-container-executor.resources-handler.class` | This should be set to "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler". Using the LinuxContainerExecutor doesn't force you to use CGroups. If you wish to use CGroups, the resource-handler-class must be set to CGroupsLCEResourceHandler. |
-| `yarn.nodemanager.linux-container-executor.cgroups.hierarchy` | The cgroups hierarchy under which to place YARN proccesses(cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured) and the Yarn user has write access to the parent directory, then the directory will be created. If the directory already exists, the administrator has to give Yarn write permissions to it recursively. |
+| `yarn.nodemanager.linux-container-executor.cgroups.hierarchy` | The cgroups hierarchy under which to place YARN processes (cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured) and the YARN user has write access to the parent directory, then the directory will be created. If the directory already exists, the administrator has to give YARN write permissions to it recursively. |
 | `yarn.nodemanager.linux-container-executor.cgroups.mount` | Whether the LCE should attempt to mount cgroups if not found - can be true or false. |
-| `yarn.nodemanager.linux-container-executor.cgroups.mount-path` | Where the LCE should attempt to mount cgroups if not found. Common locations include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux distribution in use. This path must exist before the NodeManager is launched. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and yarn.nodemanager.linux-container-executor.cgroups.mount is true. A point to note here is that the container-executor binary will try to mount the path specified + "/" + the subsystem. In our case, since we are trying to limit CPU the binary tries to mount the path specified + "/cpu" and that's the path it expects to exist. |
+| `yarn.nodemanager.linux-container-executor.cgroups.mount-path` | Optional. Where CGroups are located. LCE will try to mount them here, if `yarn.nodemanager.linux-container-executor.cgroups.mount` is true. LCE will try to use CGroups from this location, if `yarn.nodemanager.linux-container-executor.cgroups.mount` is false. If specified, this path and its subdirectories (CGroup hierarchies) must exist and they should be readable and writable by YARN before the NodeManager is launched. See CGroups mount options below for details. |
 | `yarn.nodemanager.linux-container-executor.group` | The Unix group of the NodeManager. It should match the setting in "container-executor.cfg". This configuration is required for validating the secure access of the container-executor binary. |
 
 The following settings are related to limiting resource usage of YARN containers:
@@ -42,6 +42,17 @@ The following settings are related to limiting resource usage of YARN containers
 | `yarn.nodemanager.resource.percentage-physical-cpu-limit` | This setting lets you limit the cpu usage of all YARN containers. It sets a hard upper limit on the cumulative CPU usage of the containers. For example, if set to 60, the combined CPU usage of all YARN containers will not exceed 60%. |
 | `yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage` | CGroups allows cpu usage limits to be hard or soft. When this setting is true, containers cannot use more CPU usage than allocated even if spare CPU is available. This ensures that containers can only use CPU that they were allocated. When set to false, containers can use spare CPU if available. It should be noted that irrespective of whether set to true or false, at no time can the combined CPU usage of all containers exceed the value specified in "yarn.nodemanager.resource.percentage-physical-cpu-limit". |
 
+CGroups mount options
+---------------------
+
+YARN uses CGroups through a directory structure mounted into the file system by the kernel. There are three options to attach to CGroups.
+
+| Option | Description |
+|:---- |:---- |
+| Discover CGroups mounted already | This should be used on newer systems like RHEL7 or Ubuntu16 or if the administrator mounts CGroups before YARN starts. Set `yarn.nodemanager.linux-container-executor.cgroups.mount` to false and leave other settings set to their defaults. YARN will locate the mount points in `/proc/mounts`. Common locations include `/sys/fs/cgroup` and `/cgroup`. The default location can vary depending on the Linux distribution in use.|
+| CGroups mounted by YARN | If the system does not have CGroups mounted or it is mounted to an inaccessible location then point `yarn.nodemanager.linux-container-executor.cgroups.mount-path` to an empty directory. Set `yarn.nodemanager.linux-container-executor.cgroups.mount` to true. A point to note here is that the container-executor binary will try to create and mount each subsystem as a subdirectory under this path. If `cpu` is already mounted somewhere with `cpuacct`, then the directory `cpu,cpuacct` will be created for the hierarchy.|
+| CGroups mounted already or linked but not in `/proc/mounts` | If CGroups are accessible through lxcfs or simulated by another filesystem, then point `yarn.nodemanager.linux-container-executor.cgroups.mount-path` to your CGroups root directory. Set `yarn.nodemanager.linux-container-executor.cgroups.mount` to false. YARN tries to use this path first, before any CGroup mount point discovery. The path should have a subdirectory for each CGroup hierarchy, named after the comma-separated CGroup subsystems it contains, like `<path>/cpu,cpuacct`. Valid subsystem names are `cpu, cpuacct, cpuset, memory, net_cls, blkio, freezer, devices`.|
+
 CGroups and security
 --------------------
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
index 07c3765..f1308d5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
@@ -56,7 +56,7 @@ Following are the important interfaces:
 
 * Under very rare circumstances, a programmer may want to directly use the 3 protocols to implement an application. However, note that *such behaviors are no longer encouraged for general use cases*.
 
-Writing a Simple Yarn Application
+Writing a Simple YARN Application
 ---------------------------------
 
 ### Writing a simple Client
@@ -574,4 +574,4 @@ Useful Links
 Sample Code
 -----------
 
-Yarn distributed shell: in `hadoop-yarn-applications-distributedshell` project after you set up your development environment.
+YARN distributed shell: in `hadoop-yarn-applications-distributedshell` project after you set up your development environment.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47b145b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md
index f5055d9..4973862 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md
@@ -84,7 +84,7 @@ container ID.
 
 ## The binding problem
 Hadoop YARN allows applications to run on the Hadoop cluster. Some of these are
-batch jobs or queries that can managed via Yarn’s existing API using its
+batch jobs or queries that can be managed via YARN’s existing API using its
 application ID. In addition, YARN can deploy long-lived service instances such as
 a pool of Apache Tomcat web servers or an Apache HBase cluster. YARN will deploy
 them across the cluster depending on each component's individual requirements
@@ -121,7 +121,7 @@ services accessible from within the Hadoop cluster
         /services/yarn
         /services/oozie
 
-Yarn-deployed services belonging to individual users.
+YARN-deployed services belonging to individual users.
 
         /users/joe/org-apache-hbase/demo1
         /users/joe/org-apache-hbase/demo1/components/regionserver1
@@ -148,7 +148,7 @@ their application master, to which they heartbeat regularly.
 
 ## Unsupported Registration use cases:
 
-1. A short-lived Yarn application is registered automatically in the registry,
+1. A short-lived YARN application is registered automatically in the registry,
including all its containers, and unregistered when the job terminates.
 Short-lived applications with many containers will place excessive load on a
 registry. All YARN applications will be given the option of registering, but it
@@ -259,7 +259,7 @@ service since it supports many of the properties, We pick a part of the ZK
 namespace to be the root of the service registry ( default: `yarnRegistry`).
 
On top of this base implementation we build our registry service API and the
-naming conventions that Yarn will use for its services. The registry will be
+naming conventions that YARN will use for its services. The registry will be
 accessed by the registry API, not directly via ZK - ZK is just an
 implementation choice (although unlikely to change in the future).
 
@@ -297,7 +297,7 @@ them.
 6. Core services will be registered using the following convention:
 `/services/{servicename}` e.g. `/services/hdfs`.
 
-7. Yarn services SHOULD be registered using the following convention:
+7. YARN services SHOULD be registered using the following convention:
 
         /users/{username}/{serviceclass}/{instancename}
 
@@ -823,8 +823,8 @@ The `RegistryPathStatus` class summarizes the contents of a node in the registry
 ## Security
 
 The registry will allow a service instance to be registered only under the
-path where it has permissions. Yarn will create directories with appropriate
-permissions for users where Yarn deployed services can be registered by a user.
+path where it has permissions. YARN will create directories with appropriate
+permissions for users, under which YARN-deployed services can be registered
 by the user account of the service instance. The admin will also create
 directories (such as `/services`) with appropriate permissions where core Hadoop
 services can register themselves.


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[42/50] [abbrv] hadoop git commit: HADOOP-14743. CompositeGroupsMapping should not swallow exceptions. Contributed by Wei-Chiu Chuang.

Posted by wa...@apache.org.
HADOOP-14743. CompositeGroupsMapping should not swallow exceptions. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a8b75466
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a8b75466
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a8b75466

Branch: refs/heads/YARN-5881
Commit: a8b75466b21edfe8b12beb4420492817f0e03147
Parents: 54356b1
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Thu Aug 10 09:35:27 2017 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Thu Aug 10 09:35:27 2017 -0700

----------------------------------------------------------------------
 .../java/org/apache/hadoop/security/CompositeGroupsMapping.java  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a8b75466/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
index b8cfdf7..b762df2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
@@ -74,7 +74,9 @@ public class CompositeGroupsMapping
       try {
         groups = provider.getGroups(user);
       } catch (Exception e) {
-        //LOG.warn("Exception trying to get groups for user " + user, e);      
+        LOG.warn("Unable to get groups for user {} via {} because: {}",
+            user, provider.getClass().getSimpleName(), e.toString());
+        LOG.debug("Stacktrace: ", e);
       }        
       if (groups != null && ! groups.isEmpty()) {
         groupSet.addAll(groups);
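
A small sketch of the log-and-continue pattern this fix adopts: a one-line summary at WARN so the failure is visible in production logs, with the full stack trace only at DEBUG. The provider interface below is a hypothetical stand-in for the real group mapping providers.

    import java.util.ArrayList;
    import java.util.List;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class GroupLookupExample {
      private static final Logger LOG =
          LoggerFactory.getLogger(GroupLookupExample.class);

      /** Hypothetical stand-in for a configured group mapping provider. */
      interface GroupProvider {
        List<String> getGroups(String user) throws Exception;
      }

      // Ask every provider; never let one failing provider abort the lookup.
      static List<String> getGroups(String user, List<GroupProvider> providers) {
        List<String> groups = new ArrayList<>();
        for (GroupProvider provider : providers) {
          try {
            groups.addAll(provider.getGroups(user));
          } catch (Exception e) {
            // Summary at WARN, noisy stack trace at DEBUG.
            LOG.warn("Unable to get groups for user {} via {} because: {}",
                user, provider.getClass().getSimpleName(), e.toString());
            LOG.debug("Stacktrace: ", e);
          }
        }
        return groups;
      }
    }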


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[27/50] [abbrv] hadoop git commit: YARN-6970. Add PoolInitializationException as retriable exception in FederationFacade. (Giovanni Matteo Fumarola via Subru).

Posted by wa...@apache.org.
YARN-6970. Add PoolInitializationException as retriable exception in FederationFacade. (Giovanni Matteo Fumarola via Subru).


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ad2a3506
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ad2a3506
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ad2a3506

Branch: refs/heads/YARN-5881
Commit: ad2a3506626728a6be47af0db3ca60610a568734
Parents: 1db4788
Author: Subru Krishnan <su...@apache.org>
Authored: Tue Aug 8 16:48:29 2017 -0700
Committer: Subru Krishnan <su...@apache.org>
Committed: Tue Aug 8 16:48:29 2017 -0700

----------------------------------------------------------------------
 .../utils/FederationStateStoreFacade.java       |  2 ++
 .../TestFederationStateStoreFacadeRetry.java    | 24 ++++++++++++++++++++
 2 files changed, 26 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ad2a3506/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
index 389c769..682eb14 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
@@ -70,6 +70,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.zaxxer.hikari.pool.HikariPool.PoolInitializationException;
 
 /**
  *
@@ -162,6 +163,7 @@ public final class FederationStateStoreFacade {
     exceptionToPolicyMap.put(FederationStateStoreRetriableException.class,
         basePolicy);
     exceptionToPolicyMap.put(CacheLoaderException.class, basePolicy);
+    exceptionToPolicyMap.put(PoolInitializationException.class, basePolicy);
 
     RetryPolicy retryPolicy = RetryPolicies.retryByException(
         RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicyMap);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ad2a3506/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacadeRetry.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacadeRetry.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacadeRetry.java
index 304910e..ea43268 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacadeRetry.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacadeRetry.java
@@ -30,6 +30,8 @@ import org.apache.hadoop.yarn.server.federation.store.exception.FederationStateS
 import org.junit.Assert;
 import org.junit.Test;
 
+import com.zaxxer.hikari.pool.HikariPool.PoolInitializationException;
+
 /**
  * Test class to validate FederationStateStoreFacade retry policy.
  */
@@ -119,4 +121,26 @@ public class TestFederationStateStoreFacadeRetry {
         policy.shouldRetry(new CacheLoaderException(""), maxRetries, 0, false);
     Assert.assertEquals(RetryAction.FAIL.action, action.action);
   }
+
+  /*
+   * Test to validate that PoolInitializationException is a retriable exception.
+   */
+  @Test
+  public void testFacadePoolInitRetriableException() throws Exception {
+    // PoolInitializationException is a retriable exception
+    conf = new Configuration();
+    conf.setInt(YarnConfiguration.CLIENT_FAILOVER_RETRIES, maxRetries);
+    RetryPolicy policy = FederationStateStoreFacade.createRetryPolicy(conf);
+    RetryAction action = policy.shouldRetry(
+        new PoolInitializationException(new YarnException()), 0, 0, false);
+    // We compare only the action; the delay and the reason are random
+    // values during this test
+    Assert.assertEquals(RetryAction.RETRY.action, action.action);
+
+    // After maxRetries we stop retrying
+    action =
+        policy.shouldRetry(new PoolInitializationException(new YarnException()),
+            maxRetries, 0, false);
+    Assert.assertEquals(RetryAction.FAIL.action, action.action);
+  }
 }


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[30/50] [abbrv] hadoop git commit: HADOOP-14355. Update maven-war-plugin to 3.1.0.

Posted by wa...@apache.org.
HADOOP-14355. Update maven-war-plugin to 3.1.0.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07694fc6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07694fc6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07694fc6

Branch: refs/heads/YARN-5881
Commit: 07694fc65ae6d97a430a7dd67a6277e5795c321f
Parents: ebabc70
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Aug 9 13:20:03 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Aug 9 13:20:03 2017 +0900

----------------------------------------------------------------------
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07694fc6/hadoop-project/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5aabdc7..8151016 100755
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -126,7 +126,7 @@
     <maven-resources-plugin.version>2.6</maven-resources-plugin.version>
     <maven-shade-plugin.version>2.4.3</maven-shade-plugin.version>
     <maven-jar-plugin.version>2.5</maven-jar-plugin.version>
-    <maven-war-plugin.version>2.4</maven-war-plugin.version>
+    <maven-war-plugin.version>3.1.0</maven-war-plugin.version>
     <maven-source-plugin.version>2.3</maven-source-plugin.version>
     <maven-pdf-plugin.version>1.2</maven-pdf-plugin.version>
     <maven-remote-resources-plugin.version>1.5</maven-remote-resources-plugin.version>


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[20/50] [abbrv] hadoop git commit: YARN-6890. Not display killApp button on UI if UI is unsecured but cluster is secured. Contributed by Junping Du

Posted by wa...@apache.org.
YARN-6890. Not display killApp button on UI if UI is unsecured but cluster is secured. Contributed by Junping Du


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/acf9bd8b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/acf9bd8b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/acf9bd8b

Branch: refs/heads/YARN-5881
Commit: acf9bd8b1d87b9c46821ecf0461d8dcd0a6ec6d6
Parents: 47b145b
Author: Jian He <ji...@apache.org>
Authored: Tue Aug 8 11:09:38 2017 -0700
Committer: Jian He <ji...@apache.org>
Committed: Tue Aug 8 11:09:38 2017 -0700

----------------------------------------------------------------------
 .../hadoop/fs/CommonConfigurationKeysPublic.java      |  2 ++
 .../apache/hadoop/yarn/server/webapp/AppBlock.java    | 14 +++++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/acf9bd8b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index e8d4b4c..4fda2b8 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -608,6 +608,8 @@ public class CommonConfigurationKeysPublic {
    */
   public static final String HADOOP_TOKEN_FILES =
       "hadoop.token.files";
+  public static final String HADOOP_HTTP_AUTHENTICATION_TYPE =
+    "hadoop.http.authentication.type";
 
   /**
    * @see

http://git-wip-us.apache.org/repos/asf/hadoop/blob/acf9bd8b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
index d4090aa..693aa04 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
@@ -30,6 +30,7 @@ import org.apache.commons.lang.StringEscapeUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.client.AuthenticationException;
 import org.apache.hadoop.security.http.RestCsrfPreventionFilter;
@@ -70,6 +71,8 @@ public class AppBlock extends HtmlBlock {
   protected ApplicationBaseProtocol appBaseProt;
   protected Configuration conf;
   protected ApplicationId appID = null;
+  private boolean unsecuredUI = true;
+
 
   @Inject
   protected AppBlock(ApplicationBaseProtocol appBaseProt, ViewContext ctx,
@@ -77,6 +80,9 @@ public class AppBlock extends HtmlBlock {
     super(ctx);
     this.appBaseProt = appBaseProt;
     this.conf = conf;
+    // check if UI is unsecured.
+    String httpAuth = conf.get(CommonConfigurationKeys.HADOOP_HTTP_AUTHENTICATION_TYPE);
+    this.unsecuredUI = (httpAuth != null) && httpAuth.equals("simple");
   }
 
   @Override
@@ -129,10 +135,16 @@ public class AppBlock extends HtmlBlock {
 
     setTitle(join("Application ", aid));
 
+    // YARN-6890. For a secured cluster that allows anonymous UI access,
+    // the application kill button shouldn't be shown.
+    boolean unsecuredUIForSecuredCluster = UserGroupInformation.isSecurityEnabled()
+        && this.unsecuredUI;
+
     if (webUiType != null
         && webUiType.equals(YarnWebParams.RM_WEB_UI)
         && conf.getBoolean(YarnConfiguration.RM_WEBAPP_UI_ACTIONS_ENABLED,
-          YarnConfiguration.DEFAULT_RM_WEBAPP_UI_ACTIONS_ENABLED)) {
+          YarnConfiguration.DEFAULT_RM_WEBAPP_UI_ACTIONS_ENABLED)
+            && !unsecuredUIForSecuredCluster) {
       // Application Kill
       html.div()
         .button()


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[07/50] [abbrv] hadoop git commit: YARN-6951. Fix debug log when Resource Handler chain is enabled. Contributed by Yang Wang.

Posted by wa...@apache.org.
YARN-6951. Fix debug log when Resource Handler chain is enabled. Contributed by Yang Wang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/46b7054f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/46b7054f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/46b7054f

Branch: refs/heads/YARN-5881
Commit: 46b7054fa7eae9129c21c9f3dc70acff46bfdc41
Parents: d91b7a8
Author: Sunil G <su...@apache.org>
Authored: Mon Aug 7 13:15:46 2017 +0530
Committer: Sunil G <su...@apache.org>
Committed: Mon Aug 7 13:15:46 2017 +0530

----------------------------------------------------------------------
 .../hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java     | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/46b7054f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
index 2aaa835..b3e13b4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
@@ -307,7 +307,7 @@ public class LinuxContainerExecutor extends ContainerExecutor {
           .getConfiguredResourceHandlerChain(conf);
       if (LOG.isDebugEnabled()) {
         LOG.debug("Resource handler chain enabled = " + (resourceHandlerChain
-            == null));
+            != null));
       }
       if (resourceHandlerChain != null) {
         LOG.debug("Bootstrapping resource handler chain");


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[25/50] [abbrv] hadoop git commit: HADOOP-14715. TestWasbRemoteCallHelper failing. Contributed by Esfandiar Manii.

Posted by wa...@apache.org.
HADOOP-14715. TestWasbRemoteCallHelper failing.
Contributed by Esfandiar Manii.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f4e1aa05
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f4e1aa05
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f4e1aa05

Branch: refs/heads/YARN-5881
Commit: f4e1aa0508cadcf9d4ecc4053d8c1cf6ddd6c31b
Parents: 71b8dda
Author: Steve Loughran <st...@apache.org>
Authored: Tue Aug 8 23:37:47 2017 +0100
Committer: Steve Loughran <st...@apache.org>
Committed: Tue Aug 8 23:37:47 2017 +0100

----------------------------------------------------------------------
 .../apache/hadoop/fs/azure/TestWasbRemoteCallHelper.java |  7 +++++--
 .../hadoop-azure/src/test/resources/azure-test.xml       | 11 +++++++----
 2 files changed, 12 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f4e1aa05/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestWasbRemoteCallHelper.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestWasbRemoteCallHelper.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestWasbRemoteCallHelper.java
index 393dcfd..8aad9e9 100644
--- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestWasbRemoteCallHelper.java
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestWasbRemoteCallHelper.java
@@ -282,6 +282,8 @@ public class TestWasbRemoteCallHelper
   @Test
   public void testWhenOneInstanceIsDown() throws Throwable {
 
+    boolean isAuthorizationCachingEnabled = fs.getConf().getBoolean(CachingAuthorizer.KEY_AUTH_SERVICE_CACHING_ENABLE, false);
+
     // set up mocks
     HttpClient mockHttpClient = Mockito.mock(HttpClient.class);
     HttpEntity mockHttpEntity = Mockito.mock(HttpEntity.class);
@@ -356,8 +358,9 @@ public class TestWasbRemoteCallHelper
 
     performop(mockHttpClient);
 
-    Mockito.verify(mockHttpClient, times(2)).execute(Mockito.argThat(new HttpGetForServiceLocal()));
-    Mockito.verify(mockHttpClient, times(2)).execute(Mockito.argThat(new HttpGetForService2()));
+    int expectedNumberOfInvocations = isAuthorizationCachingEnabled ? 1 : 2;
+    Mockito.verify(mockHttpClient, times(expectedNumberOfInvocations)).execute(Mockito.argThat(new HttpGetForServiceLocal()));
+    Mockito.verify(mockHttpClient, times(expectedNumberOfInvocations)).execute(Mockito.argThat(new HttpGetForService2()));
   }
 
   @Test

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f4e1aa05/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
index 8c88743..8cea256 100644
--- a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
+++ b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
@@ -29,10 +29,13 @@
     </property>
   -->
 
-  <property>
-    <name>fs.azure.secure.mode</name>
-    <value>true</value>
-  </property>
+  <!-- uncomment to test in Azure secure mode -->
+  <!--
+    <property>
+      <name>fs.azure.secure.mode</name>
+      <value>true</value>
+    </property>
+  -->
 
   <property>
     <name>fs.azure.user.agent.prefix</name>


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[26/50] [abbrv] hadoop git commit: HADOOP-14598. Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory. Contributed by Steve Loughran.

Posted by wa...@apache.org.
HADOOP-14598. Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory. Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1db4788b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1db4788b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1db4788b

Branch: refs/heads/YARN-5881
Commit: 1db4788b7d22e57f91520e4a6971774ef84ffab9
Parents: f4e1aa0
Author: Haohui Mai <wh...@apache.org>
Authored: Tue Aug 8 16:27:23 2017 -0700
Committer: Haohui Mai <wh...@apache.org>
Committed: Tue Aug 8 16:33:18 2017 -0700

----------------------------------------------------------------------
 .../org/apache/hadoop/fs/FsUrlConnection.java   | 10 ++++
 .../hadoop/fs/FsUrlStreamHandlerFactory.java    | 26 ++++++++++-
 .../apache/hadoop/fs/TestUrlStreamHandler.java  | 48 +++++++++++++++-----
 3 files changed, 72 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1db4788b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlConnection.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlConnection.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlConnection.java
index 90e75b0..03c7aed 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlConnection.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlConnection.java
@@ -23,6 +23,10 @@ import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.URLConnection;
 
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -33,6 +37,8 @@ import org.apache.hadoop.conf.Configuration;
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
 class FsUrlConnection extends URLConnection {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FsUrlConnection.class);
 
   private Configuration conf;
 
@@ -40,12 +46,16 @@ class FsUrlConnection extends URLConnection {
 
   FsUrlConnection(Configuration conf, URL url) {
     super(url);
+    Preconditions.checkArgument(conf != null, "null conf argument");
+    Preconditions.checkArgument(url != null, "null url argument");
     this.conf = conf;
   }
 
   @Override
   public void connect() throws IOException {
+    Preconditions.checkState(is == null, "Already connected");
     try {
+      LOG.debug("Connecting to {}", url);
       FileSystem fs = FileSystem.get(url.toURI(), conf);
       is = fs.open(new Path(url.getPath()));
     } catch (URISyntaxException e) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1db4788b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java
index 91a527d..751b955 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java
@@ -22,6 +22,9 @@ import java.net.URLStreamHandlerFactory;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -41,6 +44,18 @@ import org.apache.hadoop.conf.Configuration;
 public class FsUrlStreamHandlerFactory implements
     URLStreamHandlerFactory {
 
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FsUrlStreamHandlerFactory.class);
+
+  /**
+   * These are the protocols which MUST NOT be exported, as doing so
+   * would conflict with the standard URL handlers registered by
+   * the JVM. Many things will break.
+   */
+  public static final String[] UNEXPORTED_PROTOCOLS = {
+      "http", "https"
+  };
+
   // The configuration holds supported FS implementation class names.
   private Configuration conf;
 
@@ -64,14 +79,20 @@ public class FsUrlStreamHandlerFactory implements
       throw new RuntimeException(io);
     }
     this.handler = new FsUrlStreamHandler(this.conf);
+    for (String protocol : UNEXPORTED_PROTOCOLS) {
+      protocols.put(protocol, false);
+    }
   }
 
   @Override
   public java.net.URLStreamHandler createURLStreamHandler(String protocol) {
+    LOG.debug("Creating handler for protocol {}", protocol);
     if (!protocols.containsKey(protocol)) {
       boolean known = true;
       try {
-        FileSystem.getFileSystemClass(protocol, conf);
+        Class<? extends FileSystem> impl
+            = FileSystem.getFileSystemClass(protocol, conf);
+        LOG.debug("Found implementation of {}: {}", protocol, impl);
       }
       catch (IOException ex) {
         known = false;
@@ -79,9 +100,12 @@ public class FsUrlStreamHandlerFactory implements
       protocols.put(protocol, known);
     }
     if (protocols.get(protocol)) {
+      LOG.debug("Using handler for protocol {}", protocol);
       return handler;
     } else {
       // FileSystem does not know the protocol, let the VM handle this
+      LOG.debug("Unknown protocol {}, delegating to default implementation",
+          protocol);
       return null;
     }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1db4788b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUrlStreamHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUrlStreamHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUrlStreamHandler.java
index 6fc97a2..5a04f67 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUrlStreamHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUrlStreamHandler.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.fs;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 
 import java.io.File;
 import java.io.IOException;
@@ -32,6 +33,8 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.test.PathUtils;
+
+import org.junit.BeforeClass;
 import org.junit.Test;
 
 /**
@@ -39,8 +42,22 @@ import org.junit.Test;
  */
 public class TestUrlStreamHandler {
 
-  private static final File TEST_ROOT_DIR = PathUtils.getTestDir(TestUrlStreamHandler.class);
-    
+  private static final File TEST_ROOT_DIR =
+      PathUtils.getTestDir(TestUrlStreamHandler.class);
+
+  private static final FsUrlStreamHandlerFactory HANDLER_FACTORY
+      = new FsUrlStreamHandlerFactory();
+
+  @BeforeClass
+  public static void setupHandler() {
+
+    // Setup our own factory
+    // setURLStreamHandlerFactory can be called at most once in the JVM;
+    // the new URLStreamHandler is valid for all test cases
+    // in TestUrlStreamHandler
+    URL.setURLStreamHandlerFactory(HANDLER_FACTORY);
+  }
+
   /**
    * Test opening and reading from an InputStream through a hdfs:// URL.
    * <p>
@@ -55,15 +72,6 @@ public class TestUrlStreamHandler {
     Configuration conf = new HdfsConfiguration();
     MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
     FileSystem fs = cluster.getFileSystem();
-
-    // Setup our own factory
-    // setURLSteramHandlerFactor is can be set at most once in the JVM
-    // the new URLStreamHandler is valid for all tests cases 
-    // in TestStreamHandler
-    FsUrlStreamHandlerFactory factory =
-        new org.apache.hadoop.fs.FsUrlStreamHandlerFactory();
-    java.net.URL.setURLStreamHandlerFactory(factory);
-
     Path filePath = new Path("/thefile");
 
     try {
@@ -156,4 +164,22 @@ public class TestUrlStreamHandler {
 
   }
 
+  @Test
+  public void testHttpDefaultHandler() throws Throwable {
+    assertNull("Handler for HTTP is the Hadoop one",
+        HANDLER_FACTORY.createURLStreamHandler("http"));
+  }
+
+  @Test
+  public void testHttpsDefaultHandler() throws Throwable {
+    assertNull("Handler for HTTPS is the Hadoop one",
+        HANDLER_FACTORY.createURLStreamHandler("https"));
+  }
+
+  @Test
+  public void testUnknownProtocol() throws Throwable {
+    assertNull("Unknown protocols are not handled",
+        HANDLER_FACTORY.createURLStreamHandler("gopher"));
+  }
+
 }
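
A short usage sketch of the behavior these tests pin down: once the factory is registered, hdfs:// URLs resolve through Hadoop, while http and https keep the JVM's built-in handlers. The namenode address is an assumed placeholder.

    import java.io.InputStream;
    import java.net.URL;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

    public class UrlHandlerExample {
      public static void main(String[] args) throws Exception {
        // Can be called at most once per JVM.
        URL.setURLStreamHandlerFactory(
            new FsUrlStreamHandlerFactory(new Configuration()));

        // Served by the Hadoop handler (hypothetical cluster address).
        try (InputStream in =
            new URL("hdfs://namenode:8020/thefile").openStream()) {
          System.out.println("first byte: " + in.read());
        }

        // http/https stay with the JVM's default handlers, so ordinary
        // web clients keep working after the factory is registered.
        new URL("http://example.org/").openConnection();
      }
    }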


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[34/50] [abbrv] hadoop git commit: HDFS-12157. Do fsyncDirectory(..) outside of FSDataset lock. Contributed by Vinayakumar B.

Posted by wa...@apache.org.
HDFS-12157. Do fsyncDirectory(..) outside of FSDataset lock. Contributed by Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/69afa26f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/69afa26f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/69afa26f

Branch: refs/heads/YARN-5881
Commit: 69afa26f19adad4c630a307c274130eb8b697141
Parents: 1a18d5e
Author: Kihwal Lee <ki...@apache.org>
Authored: Wed Aug 9 09:03:51 2017 -0500
Committer: Kihwal Lee <ki...@apache.org>
Committed: Wed Aug 9 09:03:51 2017 -0500

----------------------------------------------------------------------
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 46 ++++++++++----------
 1 file changed, 24 insertions(+), 22 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/69afa26f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 53e2fc6..16df709 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -991,8 +991,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
         replicaInfo, smallBufferSize, conf);
 
     // Finalize the copied files
-    newReplicaInfo = finalizeReplica(block.getBlockPoolId(), newReplicaInfo,
-        false);
+    newReplicaInfo = finalizeReplica(block.getBlockPoolId(), newReplicaInfo);
     try (AutoCloseableLock lock = datasetLock.acquire()) {
       // Increment numBlocks here as this block moved without knowing to BPS
       FsVolumeImpl volume = (FsVolumeImpl) newReplicaInfo.getVolume();
@@ -1295,7 +1294,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
           replicaInfo.bumpReplicaGS(newGS);
           // finalize the replica if RBW
           if (replicaInfo.getState() == ReplicaState.RBW) {
-            finalizeReplica(b.getBlockPoolId(), replicaInfo, false);
+            finalizeReplica(b.getBlockPoolId(), replicaInfo);
           }
           return replicaInfo;
         }
@@ -1625,23 +1624,39 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   @Override // FsDatasetSpi
   public void finalizeBlock(ExtendedBlock b, boolean fsyncDir)
       throws IOException {
+    ReplicaInfo replicaInfo = null;
+    ReplicaInfo finalizedReplicaInfo = null;
     try (AutoCloseableLock lock = datasetLock.acquire()) {
       if (Thread.interrupted()) {
         // Don't allow data modifications from interrupted threads
         throw new IOException("Cannot finalize block from Interrupted Thread");
       }
-      ReplicaInfo replicaInfo = getReplicaInfo(b);
+      replicaInfo = getReplicaInfo(b);
       if (replicaInfo.getState() == ReplicaState.FINALIZED) {
         // this is legal, when recovery happens on a file that has
         // been opened for append but never modified
         return;
       }
-      finalizeReplica(b.getBlockPoolId(), replicaInfo, fsyncDir);
+      finalizedReplicaInfo = finalizeReplica(b.getBlockPoolId(), replicaInfo);
+    }
+    /*
+     * Sync the directory after rename from tmp/rbw to Finalized if
+     * configured. Though rename should be atomic operation, sync on both
+     * dest and src directories are done because IOUtils.fsync() calls
+     * directory's channel sync, not the journal itself.
+     */
+    if (fsyncDir && finalizedReplicaInfo instanceof FinalizedReplica
+        && replicaInfo instanceof LocalReplica) {
+      FinalizedReplica finalizedReplica =
+          (FinalizedReplica) finalizedReplicaInfo;
+      finalizedReplica.fsyncDirectory();
+      LocalReplica localReplica = (LocalReplica) replicaInfo;
+      localReplica.fsyncDirectory();
     }
   }
 
-  private ReplicaInfo finalizeReplica(String bpid,
-      ReplicaInfo replicaInfo, boolean fsyncDir) throws IOException {
+  private ReplicaInfo finalizeReplica(String bpid, ReplicaInfo replicaInfo)
+      throws IOException {
     try (AutoCloseableLock lock = datasetLock.acquire()) {
       ReplicaInfo newReplicaInfo = null;
       if (replicaInfo.getState() == ReplicaState.RUR &&
@@ -1656,19 +1671,6 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
         newReplicaInfo = v.addFinalizedBlock(
             bpid, replicaInfo, replicaInfo, replicaInfo.getBytesReserved());
-        /*
-         * Sync the directory after rename from tmp/rbw to Finalized if
-         * configured. Though rename should be atomic operation, sync on both
-         * dest and src directories are done because IOUtils.fsync() calls
-         * directory's channel sync, not the journal itself.
-         */
-        if (fsyncDir && newReplicaInfo instanceof FinalizedReplica
-            && replicaInfo instanceof LocalReplica) {
-          FinalizedReplica finalizedReplica = (FinalizedReplica) newReplicaInfo;
-          finalizedReplica.fsyncDirectory();
-          LocalReplica localReplica = (LocalReplica) replicaInfo;
-          localReplica.fsyncDirectory();
-        }
         if (v.isTransientStorage()) {
           releaseLockedMemory(
               replicaInfo.getOriginalBytesReserved()
@@ -2634,11 +2636,11 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
         newReplicaInfo.setNumBytes(newlength);
         volumeMap.add(bpid, newReplicaInfo.getReplicaInfo());
-        finalizeReplica(bpid, newReplicaInfo.getReplicaInfo(), false);
+        finalizeReplica(bpid, newReplicaInfo.getReplicaInfo());
       }
     }
     // finalize the block
-    return finalizeReplica(bpid, rur, false);
+    return finalizeReplica(bpid, rur);
   }
 
   @Override // FsDatasetSpi
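
The shape of the change, reduced to a generic sketch: update in-memory state under the lock, then run the slow directory fsync after the lock is released. The lock and the replica interface below are simplified stand-ins for datasetLock and the ReplicaInfo machinery.

    import java.util.concurrent.locks.ReentrantLock;

    public class FinalizeOutsideLockExample {
      /** Hypothetical stand-in for a finalized replica. */
      interface Replica {
        void fsyncDirectory() throws Exception;
      }

      private final ReentrantLock lock = new ReentrantLock();

      void finalizeBlock(Replica replica, boolean fsyncDir) throws Exception {
        Replica finalized;
        lock.lock();
        try {
          // Cheap in-memory state change stays under the lock.
          finalized = markFinalized(replica);
        } finally {
          lock.unlock();
        }
        // The expensive disk sync now runs without holding the dataset
        // lock, so other block operations are no longer blocked behind it.
        if (fsyncDir) {
          finalized.fsyncDirectory();
        }
      }

      private Replica markFinalized(Replica r) {
        return r;  // placeholder for the real finalize bookkeeping
      }
    }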


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[06/50] [abbrv] hadoop git commit: HADOOP-14722. Azure: BlockBlobInputStream position incorrect after seek. Contributed by Thomas Marquardt

Posted by wa...@apache.org.
HADOOP-14722. Azure: BlockBlobInputStream position incorrect after seek.
Contributed by Thomas Marquardt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d91b7a84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d91b7a84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d91b7a84

Branch: refs/heads/YARN-5881
Commit: d91b7a8451489f97bdde928cea774764155cfe03
Parents: 024c3ec
Author: Steve Loughran <st...@apache.org>
Authored: Sun Aug 6 20:19:23 2017 +0100
Committer: Steve Loughran <st...@apache.org>
Committed: Sun Aug 6 20:19:23 2017 +0100

----------------------------------------------------------------------
 .../hadoop/fs/azure/BlockBlobInputStream.java   | 91 +++++++++++++++-----
 .../fs/azure/TestBlockBlobInputStream.java      | 85 ++++++++++++++++--
 2 files changed, 150 insertions(+), 26 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d91b7a84/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobInputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobInputStream.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobInputStream.java
index 5542415..c37b2be 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobInputStream.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobInputStream.java
@@ -43,11 +43,16 @@ final class BlockBlobInputStream extends InputStream implements Seekable {
   private InputStream blobInputStream = null;
   private int minimumReadSizeInBytes = 0;
   private long streamPositionAfterLastRead = -1;
+  // position of next network read within stream
   private long streamPosition = 0;
+  // length of stream
   private long streamLength = 0;
   private boolean closed = false;
+  // internal buffer, re-used for performance optimization
   private byte[] streamBuffer;
+  // zero-based offset within streamBuffer of current read position
   private int streamBufferPosition;
+  // length of data written to streamBuffer; streamBuffer itself may be larger
   private int streamBufferLength;
 
   /**
@@ -82,6 +87,16 @@ final class BlockBlobInputStream extends InputStream implements Seekable {
   }
 
   /**
+   * Reset the internal stream buffer but do not release the memory.
+   * The buffer can be reused to avoid frequent memory allocations of
+   * a large buffer.
+   */
+  private void resetStreamBuffer() {
+    streamBufferPosition = 0;
+    streamBufferLength = 0;
+  }
+
+  /**
    * Gets the read position of the stream.
    * @return the zero-based byte offset of the read position.
    * @throws IOException IO failure
@@ -89,7 +104,9 @@ final class BlockBlobInputStream extends InputStream implements Seekable {
   @Override
   public synchronized long getPos() throws IOException {
     checkState();
-    return streamPosition;
+    return (streamBuffer != null)
+        ? streamPosition - streamBufferLength + streamBufferPosition
+        : streamPosition;
   }
 
   /**
@@ -107,21 +124,39 @@ final class BlockBlobInputStream extends InputStream implements Seekable {
       throw new EOFException(
           FSExceptionMessages.CANNOT_SEEK_PAST_EOF + " " + pos);
     }
-    if (pos == getPos()) {
+
+    // calculate offset between the target and current position in the stream
+    long offset = pos - getPos();
+
+    if (offset == 0) {
      // no-op, no state change
       return;
     }
 
+    if (offset > 0) {
+      // forward seek, data can be skipped as an optimization
+      if (skip(offset) != offset) {
+        throw new EOFException(FSExceptionMessages.EOF_IN_READ_FULLY);
+      }
+      return;
+    }
+
+    // reverse seek, offset is negative
     if (streamBuffer != null) {
-      long offset = streamPosition - pos;
-      if (offset > 0 && offset < streamBufferLength) {
-        streamBufferPosition = streamBufferLength - (int) offset;
+      if (streamBufferPosition + offset >= 0) {
+        // target position is inside the stream buffer,
+        // only need to move backwards within the stream buffer
+        streamBufferPosition += offset;
       } else {
-        streamBufferPosition = streamBufferLength;
+        // target position is outside the stream buffer,
+        // need to reset stream buffer and move position for next network read
+        resetStreamBuffer();
+        streamPosition = pos;
       }
+    } else {
+      streamPosition = pos;
     }
 
-    streamPosition = pos;
     // close BlobInputStream after seek is invoked because BlobInputStream
     // does not support seek
     closeBlobInputStream();
@@ -189,8 +224,7 @@ final class BlockBlobInputStream extends InputStream implements Seekable {
         streamBuffer = new byte[(int) Math.min(minimumReadSizeInBytes,
             streamLength)];
       }
-      streamBufferPosition = 0;
-      streamBufferLength = 0;
+      resetStreamBuffer();
       outputStream = new MemoryOutputStream(streamBuffer, streamBufferPosition,
           streamBuffer.length);
       needToCopy = true;
@@ -295,27 +329,44 @@ final class BlockBlobInputStream extends InputStream implements Seekable {
    * @param n the number of bytes to be skipped.
    * @return the actual number of bytes skipped.
    * @throws IOException IO failure
+   * @throws IndexOutOfBoundsException if n is negative or if the sum of n
+   * and the current value of getPos() is greater than the length of the stream.
    */
   @Override
   public synchronized long skip(long n) throws IOException {
     checkState();
 
     if (blobInputStream != null) {
-      return blobInputStream.skip(n);
-    } else {
-      if (n < 0 || streamPosition + n > streamLength) {
-        throw new IndexOutOfBoundsException("skip range");
-      }
+      // blobInputStream is open; delegate the work to it
+      long skipped = blobInputStream.skip(n);
+      // update position to the actual skip value
+      streamPosition += skipped;
+      return skipped;
+    }
 
-      if (streamBuffer != null) {
-        streamBufferPosition = (n < streamBufferLength - streamBufferPosition)
-            ? streamBufferPosition + (int) n
-            : streamBufferLength;
-      }
+    // no blob stream; implement the skip logic directly
+    if (n < 0 || n > streamLength - getPos()) {
+      throw new IndexOutOfBoundsException("skip range");
+    }
 
+    if (streamBuffer != null) {
+      // there's a buffer, so skip within it where possible
+      if (n < streamBufferLength - streamBufferPosition) {
+        // target is still inside the buffer; just advance the buffer position
+        streamBufferPosition += (int) n;
+      } else {
+        // skip moves past the buffered data, so set the position to the new
+        // value and reset the buffer ready for the next read()
+        streamPosition = getPos() + n;
+        resetStreamBuffer();
+      }
+    } else {
+      // no stream buffer; increment the stream position ready for
+      // the next triggered connection & read
       streamPosition += n;
-      return n;
     }
+    return n;
   }
 
   /**
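
The corrected getPos() above is the crux of the fix: when reads are served
from the internal buffer, streamPosition alone overstates the caller's
position. A small worked example, with all numbers hypothetical:

```java
// One network read filled a 4 KB buffer; the caller has consumed 1 KB of it.
// streamPosition already points at the next network read offset, i.e. just
// past the buffered data, so the logical position must back off by the
// unread remainder of the buffer.
long streamPosition = 8192;       // next network read offset
int streamBufferLength = 4096;    // bytes buffered by the last network read
int streamBufferPosition = 1024;  // bytes the caller has consumed so far
long pos = streamPosition - streamBufferLength + streamBufferPosition;
// pos == 5120: the buffer covers offsets [4096, 8192) and 1024 are consumed
```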

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d91b7a84/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlockBlobInputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlockBlobInputStream.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlockBlobInputStream.java
index 2453584..0ae4012 100644
--- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlockBlobInputStream.java
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlockBlobInputStream.java
@@ -155,7 +155,7 @@ public class TestBlockBlobInputStream extends AbstractWasbTestBase {
     }
 
     LOG.info("Creating test file {} of size: {}", TEST_FILE_PATH,
-        TEST_FILE_SIZE );
+        TEST_FILE_SIZE);
     ContractTestUtils.NanoTimer timer = new ContractTestUtils.NanoTimer();
 
     try(FSDataOutputStream outputStream = fs.create(TEST_FILE_PATH)) {
@@ -198,7 +198,7 @@ public class TestBlockBlobInputStream extends AbstractWasbTestBase {
   }
 
   @Test
-  public void test_0200_BasicReadTestV2() throws Exception {
+  public void test_0200_BasicReadTest() throws Exception {
     assumeHugeFileExists();
 
     try (
@@ -214,12 +214,12 @@ public class TestBlockBlobInputStream extends AbstractWasbTestBase {
       // v1 forward seek and read a kilobyte into first kilobyte of bufferV1
       inputStreamV1.seek(5 * MEGABYTE);
       int numBytesReadV1 = inputStreamV1.read(bufferV1, 0, KILOBYTE);
-      assertEquals(numBytesReadV1, KILOBYTE);
+      assertEquals(KILOBYTE, numBytesReadV1);
 
       // v2 forward seek and read a kilobyte into first kilobyte of bufferV2
       inputStreamV2.seek(5 * MEGABYTE);
       int numBytesReadV2 = inputStreamV2.read(bufferV2, 0, KILOBYTE);
-      assertEquals(numBytesReadV2, KILOBYTE);
+      assertEquals(KILOBYTE, numBytesReadV2);
 
       assertArrayEquals(bufferV1, bufferV2);
 
@@ -229,17 +229,90 @@ public class TestBlockBlobInputStream extends AbstractWasbTestBase {
       // v1 reverse seek and read a megabyte into last megabyte of bufferV1
       inputStreamV1.seek(3 * MEGABYTE);
       numBytesReadV1 = inputStreamV1.read(bufferV1, offset, len);
-      assertEquals(numBytesReadV1, len);
+      assertEquals(len, numBytesReadV1);
 
       // v2 reverse seek and read a megabyte into last megabyte of bufferV2
       inputStreamV2.seek(3 * MEGABYTE);
       numBytesReadV2 = inputStreamV2.read(bufferV2, offset, len);
-      assertEquals(numBytesReadV2, len);
+      assertEquals(len, numBytesReadV2);
 
       assertArrayEquals(bufferV1, bufferV2);
     }
   }
 
+  @Test
+  public void test_0201_RandomReadTest() throws Exception {
+    assumeHugeFileExists();
+
+    try (
+        FSDataInputStream inputStreamV1
+            = accountUsingInputStreamV1.getFileSystem().open(TEST_FILE_PATH);
+
+        FSDataInputStream inputStreamV2
+            = accountUsingInputStreamV2.getFileSystem().open(TEST_FILE_PATH);
+    ) {
+      final int bufferSize = 4 * KILOBYTE;
+      byte[] bufferV1 = new byte[bufferSize];
+      byte[] bufferV2 = new byte[bufferV1.length];
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      inputStreamV1.seek(0);
+      inputStreamV2.seek(0);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      int seekPosition = 2 * KILOBYTE;
+      inputStreamV1.seek(seekPosition);
+      inputStreamV2.seek(seekPosition);
+
+      inputStreamV1.seek(0);
+      inputStreamV2.seek(0);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      seekPosition = 5 * KILOBYTE;
+      inputStreamV1.seek(seekPosition);
+      inputStreamV2.seek(seekPosition);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      seekPosition = 10 * KILOBYTE;
+      inputStreamV1.seek(seekPosition);
+      inputStreamV2.seek(seekPosition);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+
+      seekPosition = 4100 * KILOBYTE;
+      inputStreamV1.seek(seekPosition);
+      inputStreamV2.seek(seekPosition);
+
+      verifyConsistentReads(inputStreamV1, inputStreamV2, bufferV1, bufferV2);
+    }
+  }
+
+  private void verifyConsistentReads(FSDataInputStream inputStreamV1,
+      FSDataInputStream inputStreamV2,
+      byte[] bufferV1,
+      byte[] bufferV2) throws IOException {
+    int size = bufferV1.length;
+    final int numBytesReadV1 = inputStreamV1.read(bufferV1, 0, size);
+    assertEquals("Bytes read from V1 stream", size, numBytesReadV1);
+
+    final int numBytesReadV2 = inputStreamV2.read(bufferV2, 0, size);
+    assertEquals("Bytes read from V2 stream", size, numBytesReadV2);
+
+    assertArrayEquals("Mismatch in read data", bufferV1, bufferV2);
+  }
+
   /**
    * Validates the implementation of InputStream.markSupported.
    * @throws IOException




[08/50] [abbrv] hadoop git commit: HDFS-12198. Document missing namenode metrics that were added recently. Contributed by Yiqun Lin.

Posted by wa...@apache.org.
HDFS-12198. Document missing namenode metrics that were added recently. Contributed by Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a4eb7016
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a4eb7016
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a4eb7016

Branch: refs/heads/YARN-5881
Commit: a4eb7016cb20dfbc656b831c603136785e62fddc
Parents: 46b7054
Author: Akira Ajisaka <aa...@apache.org>
Authored: Mon Aug 7 18:47:33 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Mon Aug 7 18:47:33 2017 +0900

----------------------------------------------------------------------
 .../hadoop-common/src/site/markdown/Metrics.md              | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a4eb7016/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index 852a1e9..4543fac 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -145,6 +145,9 @@ Each metrics record contains tags such as ProcessName, SessionId, and Hostname a
 | `CreateSymlinkOps` | Total number of createSymlink operations |
 | `GetLinkTargetOps` | Total number of getLinkTarget operations |
 | `FilesInGetListingOps` | Total number of files and directories listed by directory listing operations |
+| `SuccessfulReReplications` | Total number of successful block re-replications |
+| `NumTimesReReplicationNotScheduled` | Total number of times a block re-replication could not be scheduled |
+| `TimeoutReReplications` | Total number of timed out block re-replications |
 | `AllowSnapshotOps` | Total number of allowSnapshot operations |
 | `DisallowSnapshotOps` | Total number of disallowSnapshot operations |
 | `CreateSnapshotOps` | Total number of createSnapshot operations |
@@ -157,8 +160,8 @@ Each metrics record contains tags such as ProcessName, SessionId, and Hostname a
 | `SyncsNumOps` | Total number of Journal syncs |
 | `SyncsAvgTime` | Average time of Journal syncs in milliseconds |
 | `TransactionsBatchedInSync` | Total number of Journal transactions batched in sync |
-| `BlockReportNumOps` | Total number of processing block reports from DataNode |
-| `BlockReportAvgTime` | Average time of processing block reports in milliseconds |
+| `StorageBlockReportNumOps` | Total number of processing block reports from individual storages in DataNode |
+| `StorageBlockReportAvgTime` | Average time of processing block reports in milliseconds |
 | `CacheReportNumOps` | Total number of processing cache reports from DataNode |
 | `CacheReportAvgTime` | Average time of processing cache reports in milliseconds |
 | `SafeModeTime` | The interval between FSNameSystem starts and the last time safemode leaves in milliseconds.  (sometimes not equal to the time in SafeMode, see [HDFS-5156](https://issues.apache.org/jira/browse/HDFS-5156)) |
@@ -176,6 +179,8 @@ Each metrics record contains tags such as ProcessName, SessionId, and Hostname a
 | `GenerateEDEKTimeAvgTime` | Average time of generating EDEK in milliseconds |
 | `WarmUpEDEKTimeNumOps` | Total number of warming up EDEK |
 | `WarmUpEDEKTimeAvgTime` | Average time of warming up EDEK in milliseconds |
+| `ResourceCheckTime`*num*`s(50|75|90|95|99)thPercentileLatency` | The 50/75/90/95/99th percentile of NameNode resource check latency in milliseconds. Percentile measurement is off by default; it is enabled by specifying one or more monitoring intervals via `dfs.metrics.percentiles.intervals`. |
+| `StorageBlockReport`*num*`s(50|75|90|95|99)thPercentileLatency` | The 50/75/90/95/99th percentile of storage block report latency in milliseconds. Percentile measurement is off by default; it is enabled by specifying one or more monitoring intervals via `dfs.metrics.percentiles.intervals`. |
 
 FSNamesystem
 ------------
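
As both new percentile rows state, these metrics stay off until sampling
intervals are configured. A minimal sketch of enabling them programmatically;
in practice the property is normally set in hdfs-site.xml:

```java
import org.apache.hadoop.conf.Configuration;

class PercentileMetricsConfig {
  // Percentile measurement is enabled by listing one or more sampling
  // windows in seconds; here, 60-second and 300-second windows.
  static Configuration withPercentileIntervals() {
    Configuration conf = new Configuration();
    conf.set("dfs.metrics.percentiles.intervals", "60,300");
    return conf;
  }
}
```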




[15/50] [abbrv] hadoop git commit: YARN-6955. Handle concurrent register AM requests in FederationInterceptor. (Botong Huang via Subru).

Posted by wa...@apache.org.
YARN-6955. Handle concurrent register AM requests in FederationInterceptor. (Botong Huang via Subru).


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c61f2c41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c61f2c41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c61f2c41

Branch: refs/heads/YARN-5881
Commit: c61f2c419830e40bb47fb2b1fe1f7d6109ed29a9
Parents: bc20680
Author: Subru Krishnan <su...@apache.org>
Authored: Mon Aug 7 16:58:29 2017 -0700
Committer: Subru Krishnan <su...@apache.org>
Committed: Mon Aug 7 16:58:29 2017 -0700

----------------------------------------------------------------------
 .../dev-support/findbugs-exclude.xml            |  4 +-
 .../yarn/server/MockResourceManagerFacade.java  | 18 ++--
 .../amrmproxy/FederationInterceptor.java        | 43 ++++------
 .../amrmproxy/TestFederationInterceptor.java    | 88 ++++++++++++++++++--
 4 files changed, 110 insertions(+), 43 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c61f2c41/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index 034f03c..6825a36 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -594,11 +594,9 @@
     <Bug pattern="UL_UNRELEASED_LOCK_EXCEPTION_PATH" />
   </Match>
 
-  <!-- Ignore false alert for RCN_REDUNDANT_NULLCHECK_OF_NULL_VALUE -->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor" />
-    <Method name="registerApplicationMaster" />
-    <Bug pattern="RCN_REDUNDANT_NULLCHECK_OF_NULL_VALUE" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
 
 </FindBugsFilter>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c61f2c41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
index 68c55ac..e33d7e1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
@@ -246,6 +246,16 @@ public class MockResourceManagerFacade implements ApplicationClientProtocol,
 
     shouldReRegisterNext = false;
 
+    synchronized (applicationContainerIdMap) {
+      if (applicationContainerIdMap.containsKey(amrmToken)) {
+        throw new InvalidApplicationMasterRequestException(
+            AMRMClientUtils.APP_ALREADY_REGISTERED_MESSAGE);
+      }
+      // Keep track of the containers that are returned to this application
+      applicationContainerIdMap.put(amrmToken, new ArrayList<ContainerId>());
+    }
+
+    // Make sure the wait required by certain test cases happens last in the method
     synchronized (syncObj) {
       syncObj.notifyAll();
       // We reuse the port number to indicate whether the unit test want us to
@@ -261,14 +271,6 @@ public class MockResourceManagerFacade implements ApplicationClientProtocol,
       }
     }
 
-    synchronized (applicationContainerIdMap) {
-      if (applicationContainerIdMap.containsKey(amrmToken)) {
-        throw new InvalidApplicationMasterRequestException(
-            AMRMClientUtils.APP_ALREADY_REGISTERED_MESSAGE);
-      }
-      // Keep track of the containers that are returned to this application
-      applicationContainerIdMap.put(amrmToken, new ArrayList<ContainerId>());
-    }
     return RegisterApplicationMasterResponse.newInstance(null, null, null, null,
         null, request.getHost(), null);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c61f2c41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
index ffe47f4..28724aa 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
@@ -208,22 +208,25 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
    * requests from AM because of timeout between AM and AMRMProxy, which is
    * shorter than the timeout + failOver between FederationInterceptor
    * (AMRMProxy) and RM.
+   *
+   * For the same reason, this method needs to be synchronized.
    */
   @Override
-  public RegisterApplicationMasterResponse registerApplicationMaster(
-      RegisterApplicationMasterRequest request)
-      throws YarnException, IOException {
+  public synchronized RegisterApplicationMasterResponse
+      registerApplicationMaster(RegisterApplicationMasterRequest request)
+          throws YarnException, IOException {
     // If AM is calling with a different request, complain
-    if (this.amRegistrationRequest != null
-        && !this.amRegistrationRequest.equals(request)) {
-      throw new YarnException("A different request body recieved. AM should"
-          + " not call registerApplicationMaster with different request body");
+    if (this.amRegistrationRequest != null) {
+      if (!this.amRegistrationRequest.equals(request)) {
+        throw new YarnException("AM should not call "
+            + "registerApplicationMaster with a different request body");
+      }
+    } else {
+      // Save the registration request. This will be used for registering with
+      // secondary sub-clusters using UAMs, as well as re-register later
+      this.amRegistrationRequest = request;
     }
 
-    // Save the registration request. This will be used for registering with
-    // secondary sub-clusters using UAMs, as well as re-register later
-    this.amRegistrationRequest = request;
-
     /*
      * Present to AM as if we are the RM that never fails over. When actual RM
      * fails over, we always re-register automatically.
@@ -245,22 +248,8 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
     * is running and will break the elasticity feature. The registration with
      * the other sub-cluster RM will be done lazily as needed later.
      */
-    try {
-      this.amRegistrationResponse =
-          this.homeRM.registerApplicationMaster(request);
-    } catch (InvalidApplicationMasterRequestException e) {
-      if (e.getMessage()
-          .contains(AMRMClientUtils.APP_ALREADY_REGISTERED_MESSAGE)) {
-        // Some other register thread might have succeeded in the meantime
-        if (this.amRegistrationResponse != null) {
-          LOG.info("Other concurrent thread registered successfully, "
-              + "simply return the same success register response");
-          return this.amRegistrationResponse;
-        }
-      }
-      // This is a real issue, throw back to AM
-      throw e;
-    }
+    this.amRegistrationResponse =
+        this.homeRM.registerApplicationMaster(request);
 
     // the queue this application belongs will be used for getting
     // AMRMProxy policy from state store.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c61f2c41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
index 4e15323..34b0741 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
@@ -21,6 +21,11 @@ package org.apache.hadoop.yarn.server.nodemanager.amrmproxy;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
 
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
@@ -36,6 +41,7 @@ import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException;
 import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.MockResourceManagerFacade;
 import org.apache.hadoop.yarn.server.federation.policies.manager.UniformBroadcastPolicyManager;
 import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest;
@@ -234,7 +240,7 @@ public class TestFederationInterceptor extends BaseAMRMProxyTest {
     RegisterApplicationMasterRequest registerReq =
         Records.newRecord(RegisterApplicationMasterRequest.class);
     registerReq.setHost(Integer.toString(testAppId));
-    registerReq.setRpcPort(testAppId);
+    registerReq.setRpcPort(0);
     registerReq.setTrackingUrl("");
 
     RegisterApplicationMasterResponse registerResponse =
@@ -298,7 +304,7 @@ public class TestFederationInterceptor extends BaseAMRMProxyTest {
     RegisterApplicationMasterRequest registerReq =
         Records.newRecord(RegisterApplicationMasterRequest.class);
     registerReq.setHost(Integer.toString(testAppId));
-    registerReq.setRpcPort(testAppId);
+    registerReq.setRpcPort(0);
     registerReq.setTrackingUrl("");
 
     RegisterApplicationMasterResponse registerResponse =
@@ -338,6 +344,78 @@ public class TestFederationInterceptor extends BaseAMRMProxyTest {
     Assert.assertEquals(true, finshResponse.getIsUnregistered());
   }
 
+  /*
+   * Test concurrent register threads. This is possible because the timeout
+   * between AM and AMRMProxy is shorter than the timeout + failOver between
+   * FederationInterceptor (AMRMProxy) and RM. When the first call blocks due to
+   * RM failover and the AM times out, the AM calls again, resulting in a second
+   * register thread.
+   */
+  @Test(timeout = 5000)
+  public void testConcurrentRegister()
+      throws InterruptedException, ExecutionException {
+    ExecutorService threadpool = Executors.newCachedThreadPool();
+    ExecutorCompletionService<RegisterApplicationMasterResponse> compSvc =
+        new ExecutorCompletionService<>(threadpool);
+
+    Object syncObj = MockResourceManagerFacade.getSyncObj();
+
+    // Two register threads
+    synchronized (syncObj) {
+      // Make sure first thread will block within RM, before the second thread
+      // starts
+      LOG.info("Starting first register thread");
+      compSvc.submit(new ConcurrentRegisterAMCallable());
+
+      try {
+        LOG.info("Test main starts waiting for the first thread to block");
+        syncObj.wait();
+        LOG.info("Test main wait finished");
+      } catch (Exception e) {
+        LOG.info("Test main wait interrupted", e);
+      }
+    }
+
+    // The second thread will get already registered exception from RM.
+    LOG.info("Starting second register thread");
+    compSvc.submit(new ConcurrentRegisterAMCallable());
+
+    // Notify the first register thread to return
+    LOG.info("Let first blocked register thread move on");
+    synchronized (syncObj) {
+      syncObj.notifyAll();
+    }
+
+    // Both threads should return without exception
+    RegisterApplicationMasterResponse response = compSvc.take().get();
+    Assert.assertNotNull(response);
+
+    response = compSvc.take().get();
+    Assert.assertNotNull(response);
+
+    threadpool.shutdown();
+  }
+
+  /**
+   * A callable that calls registerAM to RM with blocking.
+   */
+  public class ConcurrentRegisterAMCallable
+      implements Callable<RegisterApplicationMasterResponse> {
+    @Override
+    public RegisterApplicationMasterResponse call() throws Exception {
+      RegisterApplicationMasterResponse response = null;
+      try {
+        // Use port number 1001 to let mock RM block in the register call
+        response = interceptor.registerApplicationMaster(
+            RegisterApplicationMasterRequest.newInstance(null, 1001, null));
+      } catch (Exception e) {
+        LOG.info("Register thread exception", e);
+        response = null;
+      }
+      return response;
+    }
+  }
+
   @Test
   public void testRequestInterceptorChainCreation() throws Exception {
     RequestInterceptor root =
@@ -381,7 +459,7 @@ public class TestFederationInterceptor extends BaseAMRMProxyTest {
     RegisterApplicationMasterRequest registerReq =
         Records.newRecord(RegisterApplicationMasterRequest.class);
     registerReq.setHost(Integer.toString(testAppId));
-    registerReq.setRpcPort(testAppId);
+    registerReq.setRpcPort(0);
     registerReq.setTrackingUrl("");
 
     for (int i = 0; i < 2; i++) {
@@ -397,7 +475,7 @@ public class TestFederationInterceptor extends BaseAMRMProxyTest {
     RegisterApplicationMasterRequest registerReq =
         Records.newRecord(RegisterApplicationMasterRequest.class);
     registerReq.setHost(Integer.toString(testAppId));
-    registerReq.setRpcPort(testAppId);
+    registerReq.setRpcPort(0);
     registerReq.setTrackingUrl("");
 
     RegisterApplicationMasterResponse registerResponse =
@@ -407,7 +485,7 @@ public class TestFederationInterceptor extends BaseAMRMProxyTest {
     // Register the application second time with a different request obj
     registerReq = Records.newRecord(RegisterApplicationMasterRequest.class);
     registerReq.setHost(Integer.toString(testAppId));
-    registerReq.setRpcPort(testAppId);
+    registerReq.setRpcPort(0);
     registerReq.setTrackingUrl("different");
     try {
       registerResponse = interceptor.registerApplicationMaster(registerReq);
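
The shape of the fix (serialize the register path with synchronized rather
than catch the already-registered error after the fact) generalizes beyond
YARN. A schematic sketch with assumed type names, not the actual
FederationInterceptor code, which differs in detail:

```java
// A register call made idempotent under concurrency: the second caller waits
// on the monitor, then either reuses the first caller's response or fails
// fast on a mismatched request body.
class IdempotentRegistrar<Q, R> {
  interface Backend<Q, R> {
    R register(Q request) throws Exception;
  }

  private final Backend<Q, R> backend;
  private Q savedRequest;
  private R savedResponse;

  IdempotentRegistrar(Backend<Q, R> backend) {
    this.backend = backend;
  }

  synchronized R registerOnce(Q request) throws Exception {
    if (savedRequest == null) {
      savedRequest = request;            // first caller records the request
    } else if (!savedRequest.equals(request)) {
      throw new IllegalStateException("different register request body");
    }
    if (savedResponse == null) {
      savedResponse = backend.register(request); // one real round trip
    }
    return savedResponse;
  }
}
```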




[02/50] [abbrv] hadoop git commit: HDFS-12251. Add document for StreamCapabilities. (Lei (Eddy) Xu)

Posted by wa...@apache.org.
HDFS-12251. Add document for StreamCapabilities. (Lei (Eddy) Xu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe334178
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe334178
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe334178

Branch: refs/heads/YARN-5881
Commit: fe3341786a0d61f404127bf21d1afc85b2f21d38
Parents: a6fdeb8
Author: Lei Xu <le...@apache.org>
Authored: Fri Aug 4 11:21:58 2017 -0700
Committer: Lei Xu <le...@apache.org>
Committed: Fri Aug 4 11:21:58 2017 -0700

----------------------------------------------------------------------
 .../src/site/markdown/filesystem/filesystem.md  | 24 ++++++++++++++++++++
 .../src/site/markdown/HDFSErasureCoding.md      | 19 ++++++++++++++++
 2 files changed, 43 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe334178/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
index b56666c..d7e57ce 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
@@ -1210,3 +1210,27 @@ try {
 It is notable that this is *not* done in the Hadoop codebase. This does not imply
 that robust loops are not recommended; rather, that the concurrency
 problems were not considered during the implementation of these loops.
+
+
+## <a name="StreamCapability"></a> interface `StreamCapabilities`
+
+The `StreamCapabilities` provides a way to programmatically query the
+capabilities that an `OutputStream` supports.
+
+```java
+public interface StreamCapabilities {
+  boolean hasCapability(String capability);
+}
+```
+
+### `boolean hasCapability(capability)`
+
+Return true if the `OutputStream` has the desired capability.
+
+The caller can query the capabilities of a stream using a string value.
+The following capabilities can currently be queried:
+
+ * `StreamCapabilities.HFLUSH` ("*hflush*"): the capability to flush out the data
+ in the client's buffer.
+ * `StreamCapabilities.HSYNC` ("*hsync*"): the capability to flush out the data in
+ the client's buffer and sync it to the disk device.
\ No newline at end of file
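
A short usage sketch of the probe (it assumes the stream implements
`StreamCapabilities`, as `FSDataOutputStream` does in current Hadoop):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

class ProbeBeforeFlush {
  // Probe instead of assuming: on streams that lack the capability (for
  // example erasure-coded files, where hflush is a no-op), skip the call.
  static void writeAndFlush(FileSystem fs, Path path, byte[] data)
      throws IOException {
    try (FSDataOutputStream out = fs.create(path)) {
      out.write(data);
      if (out.hasCapability(StreamCapabilities.HFLUSH)) {
        out.hflush();
      }
    }
  }
}
```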

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe334178/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
index 1c0a2de..88293ba 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
@@ -199,3 +199,22 @@ Below are the details about each command.
 *  `[-disablePolicy -policy <policyName>]`
 
      Disable an erasure coding policy.
+
+Limitations
+-----------
+
+Certain HDFS file write operations, namely `hflush`, `hsync` and `append`,
+are not supported on erasure coded files due to substantial technical
+challenges.
+
+* `append()` on an erasure coded file will throw `IOException`.
+* `hflush()` and `hsync()` on `DFSStripedOutputStream` are no-ops. Thus calling
+`hflush()` or `hsync()` on an erasure coded file cannot guarantee that data is
+persisted.
+
+A client can use the [`StreamCapabilities`](../hadoop-common/filesystem/filesystem.html#interface_StreamCapabilities)
+API to query whether an `OutputStream` supports `hflush()` and `hsync()`.
+If the client desires data persistence via `hflush()` and `hsync()`, the current
+remedy is creating such files as regular 3x replication files in a
+non-erasure-coded directory, or using `FSDataOutputStreamBuilder#replicate()`
+API to create 3x replication files in an erasure-coded directory.
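
A sketch of the second remedy named above, assuming the builder API referenced
in the text (`FSDataOutputStreamBuilder#replicate()`) is available on the
target release:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ReplicatedFileInEcDir {
  // Opt a single file out of its directory's erasure coding policy so that
  // hflush()/hsync() behave as they do for ordinary replicated files.
  static FSDataOutputStream createReplicated(FileSystem fs, Path file)
      throws IOException {
    return fs.createFile(file)
        .replicate()
        .build();
  }
}
```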




[46/50] [abbrv] hadoop git commit: HADOOP-14754. TestCommonConfigurationFields failed: core-default.xml has 2 wasb properties missing in classes. Contributed by John Zhuge.

Posted by wa...@apache.org.
HADOOP-14754. TestCommonConfigurationFields failed: core-default.xml has 2 wasb properties missing in classes.
Contributed by John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d964062f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d964062f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d964062f

Branch: refs/heads/YARN-5881
Commit: d964062f66c0772f4b1a029bfcdff921fbaaf91c
Parents: f13ca94
Author: Steve Loughran <st...@apache.org>
Authored: Fri Aug 11 10:18:17 2017 +0100
Committer: Steve Loughran <st...@apache.org>
Committed: Fri Aug 11 10:18:17 2017 +0100

----------------------------------------------------------------------
 .../org/apache/hadoop/conf/TestCommonConfigurationFields.java  | 6 ++++++
 1 file changed, 6 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d964062f/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
index da37e68..d0e0a35 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
@@ -103,6 +103,12 @@ public class TestCommonConfigurationFields extends TestConfigurationFieldsBase {
     xmlPrefixToSkipCompare.add("fs.s3n.");
     xmlPrefixToSkipCompare.add("s3native.");
 
+    // WASB properties are in a different subtree.
+    // - org.apache.hadoop.fs.azure.NativeAzureFileSystem
+    xmlPrefixToSkipCompare.add("fs.wasb.impl");
+    xmlPrefixToSkipCompare.add("fs.wasbs.impl");
+    xmlPrefixToSkipCompare.add("fs.azure.");
+
     // ADL properties are in a different subtree
     // - org.apache.hadoop.hdfs.web.ADLConfKeys
     xmlPrefixToSkipCompare.add("adl.");




[18/50] [abbrv] hadoop git commit: YARN-6961. Remove commons-logging dependency from hadoop-yarn-server-applicationhistoryservice module. Contributed by Yeliang Cang.

Posted by wa...@apache.org.
YARN-6961. Remove commons-logging dependency from hadoop-yarn-server-applicationhistoryservice module. Contributed by Yeliang Cang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/98912950
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/98912950
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/98912950

Branch: refs/heads/YARN-5881
Commit: 98912950b6167523f6238a90ce69da817db91308
Parents: 55a181f
Author: Akira Ajisaka <aa...@apache.org>
Authored: Tue Aug 8 19:38:58 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Tue Aug 8 19:38:58 2017 +0900

----------------------------------------------------------------------
 .../hadoop-yarn-server-applicationhistoryservice/pom.xml         | 4 ----
 1 file changed, 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/98912950/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index d732af4..cace493 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -131,10 +131,6 @@
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
     </dependency>
-    <dependency>
-      <groupId>commons-logging</groupId>
-      <artifactId>commons-logging</artifactId>
-    </dependency>
 
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
     <dependency>


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[47/50] [abbrv] hadoop git commit: HADOOP-10392. Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem) (ajisakaa via aw)

Posted by wa...@apache.org.
HADOOP-10392. Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem) (ajisakaa via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4222c971
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4222c971
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4222c971

Branch: refs/heads/YARN-5881
Commit: 4222c971080f2b150713727092c7197df58c88e5
Parents: d964062
Author: Allen Wittenauer <aw...@apache.org>
Authored: Fri Aug 11 09:25:56 2017 -0700
Committer: Allen Wittenauer <aw...@apache.org>
Committed: Fri Aug 11 09:25:56 2017 -0700

----------------------------------------------------------------------
 .../java/org/apache/hadoop/fs/FileUtil.java     |  4 +--
 .../org/apache/hadoop/fs/ftp/FTPFileSystem.java |  4 +--
 .../java/org/apache/hadoop/io/SequenceFile.java |  2 +-
 .../apache/hadoop/fs/TestLocalFileSystem.java   |  6 ++---
 .../java/org/apache/hadoop/io/FileBench.java    |  2 +-
 .../mapred/MiniMRClientClusterFactory.java      |  4 +--
 .../mapred/TestCombineFileInputFormat.java      |  6 ++---
 .../TestCombineSequenceFileInputFormat.java     |  7 +++--
 .../mapred/TestCombineTextInputFormat.java      |  7 +++--
 .../mapred/TestConcatenatedCompressedInput.java |  6 ++---
 .../org/apache/hadoop/mapred/TestMapRed.java    |  4 +--
 .../hadoop/mapred/TestMiniMRChildTask.java      |  4 +--
 .../hadoop/mapred/TestTextInputFormat.java      |  8 +++---
 .../TestWrappedRecordReaderClassloader.java     |  4 +--
 .../lib/join/TestWrappedRRClassloader.java      |  4 +--
 .../mapreduce/util/MRAsyncDiskService.java      |  2 +-
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  |  4 +--
 .../v2/TestMRJobsWithHistoryService.java        |  4 +--
 .../org/apache/hadoop/tools/HadoopArchives.java |  2 +-
 .../apache/hadoop/mapred/gridmix/Gridmix.java   |  2 +-
 .../hadoop/mapred/gridmix/PseudoLocalFs.java    |  8 +++++-
 .../hadoop/mapred/gridmix/TestFilePool.java     |  4 +--
 .../hadoop/mapred/gridmix/TestFileQueue.java    |  8 +++---
 .../mapred/gridmix/TestPseudoLocalFs.java       |  2 +-
 .../hadoop/mapred/gridmix/TestUserResolve.java  |  4 +--
 .../hadoop/fs/swift/util/SwiftTestUtils.java    |  2 +-
 .../fs/swift/SwiftFileSystemBaseTest.java       |  2 +-
 .../TestSwiftFileSystemPartitionedUploads.java  |  4 +--
 .../hadoop/tools/rumen/TestHistograms.java      |  6 ++---
 .../org/apache/hadoop/streaming/StreamJob.java  | 27 ++++++++++----------
 30 files changed, 78 insertions(+), 75 deletions(-)
----------------------------------------------------------------------
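
Nearly every hunk below makes the same mechanical substitution, so one
before/after sketch covers them all (hypothetical helper method):

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class MakeQualifiedSketch {
  static Path qualify(FileSystem fs, Path path) {
    // Before (deprecated): path.makeQualified(fs)
    // After: the FileSystem, which owns the default URI and the working
    // directory used to resolve relative paths, performs the qualification.
    return fs.makeQualified(path);
  }
}
```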


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
index eb8a5c3..72b9615 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -295,8 +295,8 @@ public class FileUtil {
                                         Path dst)
                                         throws IOException {
     if (srcFS == dstFS) {
-      String srcq = src.makeQualified(srcFS).toString() + Path.SEPARATOR;
-      String dstq = dst.makeQualified(dstFS).toString() + Path.SEPARATOR;
+      String srcq = srcFS.makeQualified(src).toString() + Path.SEPARATOR;
+      String dstq = dstFS.makeQualified(dst).toString() + Path.SEPARATOR;
       if (dstq.startsWith(srcq)) {
         if (srcq.length() == dstq.length()) {
           throw new IOException("Cannot copy " + src + " to itself.");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
index 4c1236b..644cf4e 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
@@ -505,7 +505,7 @@ public class FTPFileSystem extends FileSystem {
       long modTime = -1; // Modification time of root dir not known.
       Path root = new Path("/");
       return new FileStatus(length, isDir, blockReplication, blockSize,
-          modTime, root.makeQualified(this));
+          modTime, this.makeQualified(root));
     }
     String pathName = parentPath.toUri().getPath();
     FTPFile[] ftpFiles = client.listFiles(pathName);
@@ -546,7 +546,7 @@ public class FTPFileSystem extends FileSystem {
     String group = ftpFile.getGroup();
     Path filePath = new Path(parentPath, ftpFile.getName());
     return new FileStatus(length, isDir, blockReplication, blockSize, modTime,
-        accessTime, permission, user, group, filePath.makeQualified(this));
+        accessTime, permission, user, group, this.makeQualified(filePath));
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
index 2cc0e40..f42848b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
@@ -1883,7 +1883,7 @@ public class SequenceFile {
     @Deprecated
     public Reader(FileSystem fs, Path file, 
                   Configuration conf) throws IOException {
-      this(conf, file(file.makeQualified(fs)));
+      this(conf, file(fs.makeQualified(file)));
     }
 
     /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
index 357c683..90eaa2a 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
@@ -218,8 +218,8 @@ public class TestLocalFileSystem {
 
   @Test
   public void testHomeDirectory() throws IOException {
-    Path home = new Path(System.getProperty("user.home"))
-      .makeQualified(fileSys);
+    Path home = fileSys.makeQualified(
+        new Path(System.getProperty("user.home")));
     Path fsHome = fileSys.getHomeDirectory();
     assertEquals(home, fsHome);
   }
@@ -229,7 +229,7 @@ public class TestLocalFileSystem {
     Path path = new Path(TEST_ROOT_DIR, "foo%bar");
     writeFile(fileSys, path, 1);
     FileStatus status = fileSys.getFileStatus(path);
-    assertEquals(path.makeQualified(fileSys), status.getPath());
+    assertEquals(fileSys.makeQualified(path), status.getPath());
     cleanupFile(fileSys, path);
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
index 0a9d0e9..ef68cdf 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
@@ -170,7 +170,7 @@ public class FileBench extends Configured implements Tool {
     for(int i = 0; i < argv.length; ++i) {
       try {
         if ("-dir".equals(argv[i])) {
-          root = new Path(argv[++i]).makeQualified(fs);
+          root = fs.makeQualified(new Path(argv[++i]));
           System.out.println("DIR: " + root.toString());
         } else if ("-seed".equals(argv[i])) {
           job.setLong("filebench.seed", Long.valueOf(argv[++i]));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRClientClusterFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRClientClusterFactory.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRClientClusterFactory.java
index 023da48..85c534b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRClientClusterFactory.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRClientClusterFactory.java
@@ -50,8 +50,8 @@ public class MiniMRClientClusterFactory {
 
     FileSystem fs = FileSystem.get(conf);
 
-    Path testRootDir = new Path("target", identifier + "-tmpDir")
-        .makeQualified(fs);
+    Path testRootDir = fs.makeQualified(
+        new Path("target", identifier + "-tmpDir"));
     Path appJar = new Path(testRootDir, "MRAppJar.jar");
 
     // Copy MRAppJar and make it private.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineFileInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineFileInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineFileInputFormat.java
index ca3c2df..de7880d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineFileInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineFileInputFormat.java
@@ -47,9 +47,9 @@ public class TestCombineFileInputFormat {
       throw new RuntimeException("init failure", e);
     }
   }
-  private static Path workDir =
-    new Path(new Path(System.getProperty("test.build.data", "/tmp")),
-             "TestCombineFileInputFormat").makeQualified(localFs);
+  private static Path workDir = localFs.makeQualified(new Path(
+      System.getProperty("test.build.data", "/tmp"),
+      "TestCombineFileInputFormat"));
 
   private static void writeFile(FileSystem fs, Path name, 
                                 String contents) throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineSequenceFileInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineSequenceFileInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineSequenceFileInputFormat.java
index 8d0203e..8cdaa80 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineSequenceFileInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineSequenceFileInputFormat.java
@@ -53,10 +53,9 @@ public class TestCombineSequenceFileInputFormat {
     }
   }
 
-  @SuppressWarnings("deprecation")
-  private static Path workDir =
-    new Path(new Path(System.getProperty("test.build.data", "/tmp")),
-             "TestCombineSequenceFileInputFormat").makeQualified(localFs);
+  private static Path workDir = localFs.makeQualified(new Path(
+      System.getProperty("test.build.data", "/tmp"),
+      "TestCombineSequenceFileInputFormat"));
 
   @Test(timeout=10000)
   public void testFormat() throws Exception {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineTextInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineTextInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineTextInputFormat.java
index ca86dd5..581e62b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineTextInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestCombineTextInputFormat.java
@@ -60,10 +60,9 @@ public class TestCombineTextInputFormat {
     }
   }
 
-  @SuppressWarnings("deprecation")
-  private static Path workDir =
-    new Path(new Path(System.getProperty("test.build.data", "/tmp")),
-             "TestCombineTextInputFormat").makeQualified(localFs);
+  private static Path workDir = localFs.makeQualified(new Path(
+      System.getProperty("test.build.data", "/tmp"),
+      "TestCombineTextInputFormat"));
 
   // A reporter that does nothing
   private static final Reporter voidReporter = Reporter.NULL;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestConcatenatedCompressedInput.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestConcatenatedCompressedInput.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestConcatenatedCompressedInput.java
index 22a05c5..15d651d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestConcatenatedCompressedInput.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestConcatenatedCompressedInput.java
@@ -84,9 +84,9 @@ public class TestConcatenatedCompressedInput {
   public void after() {
     ZlibFactory.loadNativeZLib();
   }
-  private static Path workDir =
-    new Path(new Path(System.getProperty("test.build.data", "/tmp")),
-             "TestConcatenatedCompressedInput").makeQualified(localFs);
+  private static Path workDir = localFs.makeQualified(new Path(
+      System.getProperty("test.build.data", "/tmp"),
+      "TestConcatenatedCompressedInput"));
 
   private static LineReader makeStream(String str) throws IOException {
     return new LineReader(new ByteArrayInputStream(str.getBytes("UTF-8")),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
index d60905e..af09e09 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
@@ -342,8 +342,8 @@ public class TestMapRed extends Configured implements Tool {
       values.add(m);
       m = m.replace((char)('A' + i - 1), (char)('A' + i));
     }
-    Path testdir = new Path(
-        System.getProperty("test.build.data","/tmp")).makeQualified(fs);
+    Path testdir = fs.makeQualified(new Path(
+        System.getProperty("test.build.data","/tmp")));
     fs.delete(testdir, true);
     Path inFile = new Path(testdir, "nullin/blah");
     SequenceFile.Writer w = SequenceFile.createWriter(fs, conf, inFile,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMiniMRChildTask.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMiniMRChildTask.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMiniMRChildTask.java
index f690118..51f0120 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMiniMRChildTask.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMiniMRChildTask.java
@@ -75,8 +75,8 @@ public class TestMiniMRChildTask {
     }
   }
 
-  private static Path TEST_ROOT_DIR = new Path("target",
-      TestMiniMRChildTask.class.getName() + "-tmpDir").makeQualified(localFs);
+  private static Path TEST_ROOT_DIR = localFs.makeQualified(
+      new Path("target", TestMiniMRChildTask.class.getName() + "-tmpDir"));
   static Path APP_JAR = new Path(TEST_ROOT_DIR, "MRAppJar.jar");
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestTextInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestTextInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestTextInputFormat.java
index 5106c38..67bd497 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestTextInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestTextInputFormat.java
@@ -61,10 +61,10 @@ public class TestTextInputFormat {
       throw new RuntimeException("init failure", e);
     }
   }
-  @SuppressWarnings("deprecation")
-  private static Path workDir =
-    new Path(new Path(System.getProperty("test.build.data", "/tmp")),
-             "TestTextInputFormat").makeQualified(localFs);
+
+  private static Path workDir = localFs.makeQualified(new Path(
+      System.getProperty("test.build.data", "/tmp"),
+      "TestTextInputFormat"));
 
   @Test (timeout=500000)
   public void testFormat() throws Exception {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/join/TestWrappedRecordReaderClassloader.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/join/TestWrappedRecordReaderClassloader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/join/TestWrappedRecordReaderClassloader.java
index ae5572f..785898d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/join/TestWrappedRecordReaderClassloader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/join/TestWrappedRecordReaderClassloader.java
@@ -50,8 +50,8 @@ public class TestWrappedRecordReaderClassloader {
     assertTrue(job.getClassLoader() instanceof Fake_ClassLoader);
 
     FileSystem fs = FileSystem.get(job);
-    Path testdir = new Path(System.getProperty("test.build.data", "/tmp"))
-        .makeQualified(fs);
+    Path testdir = fs.makeQualified(new Path(
+        System.getProperty("test.build.data", "/tmp")));
 
     Path base = new Path(testdir, "/empty");
     Path[] src = { new Path(base, "i0"), new Path("i1"), new Path("i2") };

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/join/TestWrappedRRClassloader.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/join/TestWrappedRRClassloader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/join/TestWrappedRRClassloader.java
index 680e246..e3d7fa0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/join/TestWrappedRRClassloader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/join/TestWrappedRRClassloader.java
@@ -50,8 +50,8 @@ public class TestWrappedRRClassloader {
     assertTrue(conf.getClassLoader() instanceof Fake_ClassLoader);
 
     FileSystem fs = FileSystem.get(conf);
-    Path testdir = new Path(System.getProperty("test.build.data", "/tmp"))
-        .makeQualified(fs);
+    Path testdir = fs.makeQualified(new Path(
+        System.getProperty("test.build.data", "/tmp")));
 
     Path base = new Path(testdir, "/empty");
     Path[] src = { new Path(base, "i0"), new Path("i1"), new Path("i2") };

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/util/MRAsyncDiskService.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/util/MRAsyncDiskService.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/util/MRAsyncDiskService.java
index 4446756..be46385 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/util/MRAsyncDiskService.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/util/MRAsyncDiskService.java
@@ -330,7 +330,7 @@ public class MRAsyncDiskService {
    * Returns the normalized path of a path.
    */
   private String normalizePath(String path) {
-    return (new Path(path)).makeQualified(this.localFileSystem)
+    return this.localFileSystem.makeQualified(new Path(path))
         .toUri().getPath();
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
index c6d2168..274f405 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
@@ -128,8 +128,8 @@ public class TestMRJobs {
     }
   }
 
-  private static Path TEST_ROOT_DIR = new Path("target",
-      TestMRJobs.class.getName() + "-tmpDir").makeQualified(localFs);
+  private static Path TEST_ROOT_DIR = localFs.makeQualified(
+      new Path("target", TestMRJobs.class.getName() + "-tmpDir"));
   static Path APP_JAR = new Path(TEST_ROOT_DIR, "MRAppJar.jar");
   private static final String OUTPUT_ROOT_DIR = "/tmp/" +
     TestMRJobs.class.getSimpleName();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobsWithHistoryService.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobsWithHistoryService.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobsWithHistoryService.java
index f9236a9..98a6de2 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobsWithHistoryService.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobsWithHistoryService.java
@@ -73,8 +73,8 @@ public class TestMRJobsWithHistoryService {
     }
   }
 
-  private static Path TEST_ROOT_DIR = new Path("target",
-      TestMRJobs.class.getName() + "-tmpDir").makeQualified(localFs);
+  private static Path TEST_ROOT_DIR = localFs.makeQualified(
+      new Path("target", TestMRJobs.class.getName() + "-tmpDir"));
   static Path APP_JAR = new Path(TEST_ROOT_DIR, "MRAppJar.jar");
 
   @Before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java b/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
index c2097dc..8ad8600 100644
--- a/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
+++ b/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
@@ -473,7 +473,7 @@ public class HadoopArchives implements Tool {
     conf.setLong(HAR_BLOCKSIZE_LABEL, blockSize);
     conf.setLong(HAR_PARTSIZE_LABEL, partSize);
     conf.set(DST_HAR_LABEL, archiveName);
-    conf.set(SRC_PARENT_LABEL, parentPath.makeQualified(fs).toString());
+    conf.set(SRC_PARENT_LABEL, fs.makeQualified(parentPath).toString());
     conf.setInt(HAR_REPLICATION_LABEL, repl);
     Path outputPath = new Path(dest, archiveName);
     FileOutputFormat.setOutputPath(conf, outputPath);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/Gridmix.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/Gridmix.java b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/Gridmix.java
index 4386bc1..3507b7f 100644
--- a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/Gridmix.java
+++ b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/Gridmix.java
@@ -447,7 +447,7 @@ public class Gridmix extends Configured implements Tool {
 
     // Create <ioPath> with 777 permissions
     final FileSystem inputFs = ioPath.getFileSystem(conf);
-    ioPath = ioPath.makeQualified(inputFs);
+    ioPath = inputFs.makeQualified(ioPath);
     boolean succeeded = false;
     try {
       succeeded = FileSystem.mkdirs(inputFs, ioPath,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/PseudoLocalFs.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/PseudoLocalFs.java b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/PseudoLocalFs.java
index d7ef563..15fc68e 100644
--- a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/PseudoLocalFs.java
+++ b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/PseudoLocalFs.java
@@ -116,7 +116,7 @@ class PseudoLocalFs extends FileSystem {
    * @throws FileNotFoundException
    */
   long validateFileNameFormat(Path path) throws FileNotFoundException {
-    path = path.makeQualified(this);
+    path = this.makeQualified(path);
     boolean valid = true;
     long fileSize = 0;
     if (!path.toUri().getScheme().equals(getUri().getScheme())) {
@@ -329,4 +329,10 @@ class PseudoLocalFs extends FileSystem {
     throw new UnsupportedOperationException("SetWorkingDirectory "
         + "is not supported in pseudo local file system.");
   }
+
+  @Override
+  public Path makeQualified(Path path) {
+  // skip FileSystem#checkPath() so paths on other file systems can be
+  // validated (and rejected) by this class itself
+    return path.makeQualified(this.getUri(), this.getWorkingDirectory());
+  }
 }

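[Editor's sketch] The new override matters because the default FileSystem#makeQualified runs checkPath() first, which throws IllegalArgumentException for a path whose scheme does not match the file system; validateFileNameFormat() above wants to see such paths itself and report them with its own message. A rough sketch of what the override delegates to (the pseudo:/// URI and file name are assumptions for illustration):

    import java.net.URI;
    import org.apache.hadoop.fs.Path;

    public class QualifyWithoutCheckPath {
      public static void main(String[] args) {
        URI fsUri = URI.create("pseudo:///");      // assumed PseudoLocalFs URI
        Path workingDir = new Path("pseudo:///");
        Path p = new Path("myPseudoFile.1234");
        // Path#makeQualified(URI, Path) qualifies without any scheme check.
        System.out.println(p.makeQualified(fsUri, workingDir));
      }
    }
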
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFilePool.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFilePool.java b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFilePool.java
index 4be90c6..a75414a 100644
--- a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFilePool.java
+++ b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFilePool.java
@@ -48,8 +48,8 @@ public class TestFilePool {
     try {
       final Configuration conf = new Configuration();
       final FileSystem fs = FileSystem.getLocal(conf).getRaw();
-      return new Path(System.getProperty("test.build.data", "/tmp"),
-          "testFilePool").makeQualified(fs);
+      return fs.makeQualified(new Path(
+          System.getProperty("test.build.data", "/tmp"), "testFilePool"));
     } catch (IOException e) {
       fail();
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFileQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFileQueue.java b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFileQueue.java
index a4668ee..e68e83f 100644
--- a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFileQueue.java
+++ b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestFileQueue.java
@@ -48,8 +48,8 @@ public class TestFileQueue {
   public static void setup() throws IOException {
     final Configuration conf = new Configuration();
     final FileSystem fs = FileSystem.getLocal(conf).getRaw();
-    final Path p = new Path(System.getProperty("test.build.data", "/tmp"),
-        "testFileQueue").makeQualified(fs);
+    final Path p = fs.makeQualified(new Path(
+        System.getProperty("test.build.data", "/tmp"), "testFileQueue"));
     fs.delete(p, true);
     final byte[] b = new byte[BLOCK];
     for (int i = 0; i < NFILES; ++i) {
@@ -71,8 +71,8 @@ public class TestFileQueue {
   public static void cleanup() throws IOException {
     final Configuration conf = new Configuration();
     final FileSystem fs = FileSystem.getLocal(conf).getRaw();
-    final Path p = new Path(System.getProperty("test.build.data", "/tmp"),
-        "testFileQueue").makeQualified(fs);
+    final Path p = fs.makeQualified(new Path(
+        System.getProperty("test.build.data", "/tmp"), "testFileQueue"));
     fs.delete(p, true);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestPseudoLocalFs.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestPseudoLocalFs.java b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestPseudoLocalFs.java
index a607ece..7179c5d 100644
--- a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestPseudoLocalFs.java
+++ b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestPseudoLocalFs.java
@@ -224,7 +224,7 @@ public class TestPseudoLocalFs {
 
     // Validate operations on valid qualified path
     path = new Path("myPsedoFile.1237");
-    path = path.makeQualified(pfs);
+    path = pfs.makeQualified(path);
     validateGetFileStatus(pfs, path, true);
     validateCreate(pfs, path, true);
     validateOpen(pfs, path, true);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestUserResolve.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestUserResolve.java b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestUserResolve.java
index 8050f33..4407515 100644
--- a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestUserResolve.java
+++ b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestUserResolve.java
@@ -40,8 +40,8 @@ public class TestUserResolve {
   public static void createRootDir() throws IOException {
     conf = new Configuration();
     fs = FileSystem.getLocal(conf);
-    rootDir = new Path(new Path(System.getProperty("test.build.data", "/tmp"))
-                 .makeQualified(fs), "gridmixUserResolve");
+    rootDir = new Path(fs.makeQualified(new Path(
+        System.getProperty("test.build.data", "/tmp"))), "gridmixUserResolve");
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
index f91ba30..726045e 100644
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
+++ b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
@@ -278,7 +278,7 @@ public class SwiftTestUtils extends org.junit.Assert {
     noteAction(action);
     try {
       if (fileSystem != null) {
-        fileSystem.delete(new Path(cleanupPath).makeQualified(fileSystem),
+        fileSystem.delete(fileSystem.makeQualified(new Path(cleanupPath)),
                           true);
       }
     } catch (Exception e) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java
index 12f58e6..99e03c7 100644
--- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java
+++ b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java
@@ -159,7 +159,7 @@ public class SwiftFileSystemBaseTest extends Assert implements
    * @return a qualified path instance
    */
   protected Path path(String pathString) {
-    return new Path(pathString).makeQualified(fs);
+    return fs.makeQualified(new Path(pathString));
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java
index f344093..b42abcd 100644
--- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java
+++ b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java
@@ -126,7 +126,7 @@ public class TestSwiftFileSystemPartitionedUploads extends
       SwiftTestUtils.compareByteArrays(src, dest, len);
       FileStatus status;
 
-      final Path qualifiedPath = path.makeQualified(fs);
+      final Path qualifiedPath = fs.makeQualified(path);
       status = fs.getFileStatus(qualifiedPath);
       //now see what block location info comes back.
       //This will vary depending on the Swift version, so the results
@@ -216,7 +216,7 @@ public class TestSwiftFileSystemPartitionedUploads extends
 
   private FileStatus validatePathLen(Path path, int len) throws IOException {
     //verify that the length is what was written in a direct status check
-    final Path qualifiedPath = path.makeQualified(fs);
+    final Path qualifiedPath = fs.makeQualified(path);
     FileStatus[] parentDirListing = fs.listStatus(qualifiedPath.getParent());
     StringBuilder listing = lsToString(parentDirListing);
     String parentDirLS = listing.toString();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-rumen/src/test/java/org/apache/hadoop/tools/rumen/TestHistograms.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/test/java/org/apache/hadoop/tools/rumen/TestHistograms.java b/hadoop-tools/hadoop-rumen/src/test/java/org/apache/hadoop/tools/rumen/TestHistograms.java
index 206095a..52caaf5 100644
--- a/hadoop-tools/hadoop-rumen/src/test/java/org/apache/hadoop/tools/rumen/TestHistograms.java
+++ b/hadoop-tools/hadoop-rumen/src/test/java/org/apache/hadoop/tools/rumen/TestHistograms.java
@@ -57,8 +57,8 @@ public class TestHistograms {
   public void testHistograms() throws IOException {
     final Configuration conf = new Configuration();
     final FileSystem lfs = FileSystem.getLocal(conf);
-    final Path rootInputDir = new Path(
-        System.getProperty("test.tools.input.dir", "")).makeQualified(lfs);
+    final Path rootInputDir = lfs.makeQualified(new Path(
+        System.getProperty("test.tools.input.dir", "target/input")));
     final Path rootInputFile = new Path(rootInputDir, "rumen/histogram-tests");
 
 
@@ -132,7 +132,7 @@ public class TestHistograms {
     final FileSystem lfs = FileSystem.getLocal(conf);
 
     for (String arg : args) {
-      Path filePath = new Path(arg).makeQualified(lfs);
+      Path filePath = lfs.makeQualified(new Path(arg));
       String fileName = filePath.getName();
       if (fileName.startsWith("input")) {
         LoggedDiscreteCDF newResult = histogramFileToCDF(filePath, lfs);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java b/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
index 9f5b293..0b239d0 100644
--- a/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
+++ b/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
@@ -22,13 +22,11 @@ import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.URI;
-import java.net.URISyntaxException;
 import java.net.URLEncoder;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
 import java.util.Map;
-import java.util.regex.Pattern;
 import java.util.TreeMap;
 import java.util.TreeSet;
 
@@ -41,12 +39,12 @@ import org.apache.commons.cli.Options;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.mapreduce.MRConfig;
 import org.apache.hadoop.mapreduce.MRJobConfig;
 import org.apache.hadoop.mapreduce.filecache.DistributedCache;
 import org.apache.hadoop.mapreduce.server.jobtracker.JTConfig;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.mapred.FileInputFormat;
@@ -56,7 +54,6 @@ import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.JobID;
 import org.apache.hadoop.mapred.KeyValueTextInputFormat;
-import org.apache.hadoop.mapred.OutputFormat;
 import org.apache.hadoop.mapred.RunningJob;
 import org.apache.hadoop.mapred.SequenceFileAsTextInputFormat;
 import org.apache.hadoop.mapred.SequenceFileInputFormat;
@@ -65,6 +62,7 @@ import org.apache.hadoop.mapred.TextOutputFormat;
 import org.apache.hadoop.mapred.lib.LazyOutputFormat;
 import org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner;
 import org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer;
+import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.streaming.io.IdentifierResolver;
 import org.apache.hadoop.streaming.io.InputWriter;
 import org.apache.hadoop.streaming.io.OutputReader;
@@ -297,7 +295,10 @@ public class StreamJob implements Tool {
           try {
             Path path = new Path(file);
             FileSystem localFs = FileSystem.getLocal(config_);
-            String finalPath = path.makeQualified(localFs).toString();
+            Path qualifiedPath = path.makeQualified(
+                localFs.getUri(), localFs.getWorkingDirectory());
+            validate(qualifiedPath);
+            String finalPath = qualifiedPath.toString();
             if(fileList.length() > 0) {
               fileList.append(',');
             }
@@ -313,7 +314,6 @@ public class StreamJob implements Tool {
           tmpFiles = tmpFiles + "," + fileList;
         }
         config_.set("tmpfiles", tmpFiles);
-        validate(packageFiles_);
       }
 
       String fsName = cmdLine.getOptionValue("dfs");
@@ -391,14 +391,13 @@ public class StreamJob implements Tool {
     return OptionBuilder.withDescription(desc).create(name);
   }
 
-  private void validate(final List<String> values)
-  throws IllegalArgumentException {
-    for (String file : values) {
-      File f = new File(file);
-      if (!FileUtil.canRead(f)) {
-        fail("File: " + f.getAbsolutePath()
-          + " does not exist, or is not readable.");
-      }
+  private void validate(final Path path) throws IOException {
+    try {
+      path.getFileSystem(config_).access(path, FsAction.READ);
+    } catch (FileNotFoundException e) {
+      fail("File: " + path + " does not exist.");
+    } catch (AccessControlException e) {
+      fail("File: " + path + " is not readable.");
     }
   }
 

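[Editor's sketch] The rewritten validate() relies on FileSystem#access(Path, FsAction), which performs the existence and permission checks through whatever file system backs the path, instead of the old java.io.File readability test that only worked for local files. The same pattern in a small self-contained sketch (class name and argument handling are illustrative):

    import java.io.FileNotFoundException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.security.AccessControlException;

    public class ReadableCheck {
      public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);
        try {
          // Throws FileNotFoundException if absent, AccessControlException
          // if the caller lacks READ permission.
          path.getFileSystem(new Configuration()).access(path, FsAction.READ);
          System.out.println(path + " exists and is readable");
        } catch (FileNotFoundException e) {
          System.err.println("File: " + path + " does not exist.");
        } catch (AccessControlException e) {
          System.err.println("File: " + path + " is not readable.");
        }
      }
    }
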



[09/50] [abbrv] hadoop git commit: YARN-6873. Moving logging APIs over to slf4j in hadoop-yarn-server-applicationhistoryservice. Contributed by Yeliang Cang.

Posted by wa...@apache.org.
YARN-6873. Moving logging APIs over to slf4j in hadoop-yarn-server-applicationhistoryservice. Contributed by Yeliang Cang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/839e077f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/839e077f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/839e077f

Branch: refs/heads/YARN-5881
Commit: 839e077faf4019d6efdcd89d95930023cd0b0a08
Parents: a4eb701
Author: Akira Ajisaka <aa...@apache.org>
Authored: Mon Aug 7 18:56:00 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Mon Aug 7 18:56:00 2017 +0900

----------------------------------------------------------------------
 .../ApplicationHistoryClientService.java        |  8 ++---
 .../ApplicationHistoryManagerImpl.java          |  8 ++---
 ...pplicationHistoryManagerOnTimelineStore.java |  8 ++---
 .../ApplicationHistoryServer.java               | 10 +++---
 .../FileSystemApplicationHistoryStore.java      | 22 ++++++------
 .../webapp/AHSWebServices.java                  |  7 ++--
 .../webapp/NavBlock.java                        |  8 ++---
 .../timeline/KeyValueBasedTimelineStore.java    |  8 ++---
 .../server/timeline/LeveldbTimelineStore.java   | 35 ++++++++++----------
 .../yarn/server/timeline/RollingLevelDB.java    | 15 +++++----
 .../timeline/RollingLevelDBTimelineStore.java   | 22 ++++++------
 .../server/timeline/TimelineDataManager.java    |  7 ++--
 .../recovery/LeveldbTimelineStateStore.java     | 30 ++++++++---------
 .../timeline/security/TimelineACLsManager.java  |  7 ++--
 ...lineDelegationTokenSecretManagerService.java |  8 ++---
 .../timeline/webapp/TimelineWebServices.java    |  7 ++--
 .../TestFileSystemApplicationHistoryStore.java  |  8 ++---
 .../timeline/TestLeveldbTimelineStore.java      |  2 +-
 18 files changed, 111 insertions(+), 109 deletions(-)
----------------------------------------------------------------------
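[Editor's sketch] The migration is mostly mechanical: commons-logging Log/LogFactory fields become slf4j Logger/LoggerFactory fields, and because slf4j defines no FATAL level, the few LOG.fatal(...) call sites become LOG.error(...). The pattern in one small sketch (class name illustrative):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class Slf4jStyle {
      // before: private static final Log LOG = LogFactory.getLog(Slf4jStyle.class);
      private static final Logger LOG =
          LoggerFactory.getLogger(Slf4jStyle.class);

      public static void main(String[] args) {
        LOG.info("slf4j supports {} placeholders", "parameterized");
        // slf4j has no fatal(); a former LOG.fatal(msg, t) becomes:
        LOG.error("Error starting service", new Exception("example"));
      }
    }
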


http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java
index 73d5d39..7d57048 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java
@@ -22,8 +22,6 @@ import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.util.ArrayList;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
@@ -61,11 +59,13 @@ import org.apache.hadoop.yarn.ipc.YarnRPC;
 import org.apache.hadoop.yarn.server.timeline.security.authorize.TimelinePolicyProvider;
 
 import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class ApplicationHistoryClientService extends AbstractService implements
     ApplicationHistoryProtocol {
-  private static final Log LOG = LogFactory
-    .getLog(ApplicationHistoryClientService.class);
+  private static final Logger LOG =
+          LoggerFactory.getLogger(ApplicationHistoryClientService.class);
   private ApplicationHistoryManager history;
   private Server server;
   private InetSocketAddress bindAddress;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
index 130bb32..b8931d8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
@@ -23,8 +23,6 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.Map.Entry;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.service.AbstractService;
@@ -42,11 +40,13 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.records.Container
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class ApplicationHistoryManagerImpl extends AbstractService implements
     ApplicationHistoryManager {
-  private static final Log LOG = LogFactory
-    .getLog(ApplicationHistoryManagerImpl.class);
+  private static final Logger LOG =
+          LoggerFactory.getLogger(ApplicationHistoryManagerImpl.class);
   private static final String UNAVAILABLE = "N/A";
 
   private ApplicationHistoryStore historyStore;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
index 5404338..9240ed8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
@@ -28,8 +28,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AuthorizationException;
@@ -69,12 +67,14 @@ import org.apache.hadoop.yarn.util.ConverterUtils;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class ApplicationHistoryManagerOnTimelineStore extends AbstractService
     implements
     ApplicationHistoryManager {
-  private static final Log LOG = LogFactory
-      .getLog(ApplicationHistoryManagerOnTimelineStore.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(ApplicationHistoryManagerOnTimelineStore.class);
 
   @VisibleForTesting
   static final String UNAVAILABLE = "N/A";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
index 6e6e98b..85e5f2d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
@@ -22,8 +22,6 @@ import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.util.ArrayList;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.http.HttpServer2;
@@ -60,6 +58,8 @@ import org.eclipse.jetty.servlet.FilterHolder;
 import org.eclipse.jetty.webapp.WebAppContext;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * History server that keeps track of all types of history in the cluster.
@@ -68,8 +68,8 @@ import com.google.common.annotations.VisibleForTesting;
 public class ApplicationHistoryServer extends CompositeService {
 
   public static final int SHUTDOWN_HOOK_PRIORITY = 30;
-  private static final Log LOG = LogFactory
-    .getLog(ApplicationHistoryServer.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(ApplicationHistoryServer.class);
 
   private ApplicationHistoryClientService ahsClientService;
   private ApplicationACLsManager aclsManager;
@@ -178,7 +178,7 @@ public class ApplicationHistoryServer extends CompositeService {
       appHistoryServer.init(conf);
       appHistoryServer.start();
     } catch (Throwable t) {
-      LOG.fatal("Error starting ApplicationHistoryServer", t);
+      LOG.error("Error starting ApplicationHistoryServer", t);
       ExitUtil.terminate(-1, "Error starting ApplicationHistoryServer");
     }
     return appHistoryServer;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
index be7bc6d..fa2da44 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
@@ -30,8 +30,6 @@ import java.util.Map.Entry;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -74,6 +72,8 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.records.impl.pb.C
 import org.apache.hadoop.yarn.util.ConverterUtils;
 
 import com.google.protobuf.InvalidProtocolBufferException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * File system implementation of {@link ApplicationHistoryStore}. In this
@@ -89,8 +89,8 @@ import com.google.protobuf.InvalidProtocolBufferException;
 public class FileSystemApplicationHistoryStore extends AbstractService
     implements ApplicationHistoryStore {
 
-  private static final Log LOG = LogFactory
-    .getLog(FileSystemApplicationHistoryStore.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(FileSystemApplicationHistoryStore.class);
 
   private static final String ROOT_DIR_NAME = "ApplicationHistoryDataRoot";
   private static final int MIN_BLOCK_SIZE = 256 * 1024;
@@ -141,7 +141,7 @@ public class FileSystemApplicationHistoryStore extends AbstractService
       }
       outstandingWriters.clear();
     } finally {
-      IOUtils.cleanup(LOG, fs);
+      IOUtils.cleanupWithLogger(LOG, fs);
     }
     super.serviceStop();
   }
@@ -711,12 +711,12 @@ public class FileSystemApplicationHistoryStore extends AbstractService
     }
 
     public void reset() throws IOException {
-      IOUtils.cleanup(LOG, scanner);
+      IOUtils.cleanupWithLogger(LOG, scanner);
       scanner = reader.createScanner();
     }
 
     public void close() {
-      IOUtils.cleanup(LOG, scanner, reader, fsdis);
+      IOUtils.cleanupWithLogger(LOG, scanner, reader, fsdis);
     }
 
   }
@@ -740,13 +740,13 @@ public class FileSystemApplicationHistoryStore extends AbstractService
                 YarnConfiguration.DEFAULT_FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE), null,
                 getConfig());
       } catch (IOException e) {
-        IOUtils.cleanup(LOG, fsdos);
+        IOUtils.cleanupWithLogger(LOG, fsdos);
         throw e;
       }
     }
 
     public synchronized void close() {
-      IOUtils.cleanup(LOG, writer, fsdos);
+      IOUtils.cleanupWithLogger(LOG, writer, fsdos);
     }
 
     public synchronized void writeHistoryData(HistoryDataKey key, byte[] value)
@@ -756,13 +756,13 @@ public class FileSystemApplicationHistoryStore extends AbstractService
         dos = writer.prepareAppendKey(-1);
         key.write(dos);
       } finally {
-        IOUtils.cleanup(LOG, dos);
+        IOUtils.cleanupWithLogger(LOG, dos);
       }
       try {
         dos = writer.prepareAppendValue(value.length);
         dos.write(value);
       } finally {
-        IOUtils.cleanup(LOG, dos);
+        IOUtils.cleanupWithLogger(LOG, dos);
       }
     }
 

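[Editor's sketch] Hadoop's IOUtils.cleanup(Log, Closeable...) takes a commons-logging Log, so every class whose LOG field moves to slf4j also switches to IOUtils.cleanupWithLogger, its replacement that accepts an org.slf4j.Logger. Usage in a minimal sketch:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import org.apache.hadoop.io.IOUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CleanupSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(CleanupSketch.class);

      public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(new byte[0]);
        // Closes every argument, logging rather than throwing IOExceptions.
        IOUtils.cleanupWithLogger(LOG, in);
      }
    }
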
http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
index 6195199..13410a8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
@@ -42,8 +42,6 @@ import javax.ws.rs.core.Response;
 import javax.ws.rs.core.StreamingOutput;
 import javax.ws.rs.core.Response.ResponseBuilder;
 import javax.ws.rs.core.Response.Status;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
@@ -80,12 +78,15 @@ import com.google.inject.Inject;
 import com.google.inject.Singleton;
 import com.sun.jersey.api.client.ClientHandlerException;
 import com.sun.jersey.api.client.UniformInterfaceException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 @Singleton
 @Path("/ws/v1/applicationhistory")
 public class AHSWebServices extends WebServices {
 
-  private static final Log LOG = LogFactory.getLog(AHSWebServices.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(AHSWebServices.class);
   private static final String NM_DOWNLOAD_URI_STR =
       "/ws/v1/node/containers";
   private static final Joiner JOINER = Joiner.on("");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java
index 3ee4dd1..915af4a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java
@@ -18,21 +18,19 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
 import org.apache.hadoop.yarn.util.Log4jWarningErrorMetricsAppender;
 import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet;
 import org.apache.hadoop.yarn.webapp.view.HtmlBlock;
 
+import static org.apache.hadoop.util.GenericsUtil.isLog4jLogger;
+
 public class NavBlock extends HtmlBlock {
 
   @Override
   public void render(Block html) {
     boolean addErrorsAndWarningsLink = false;
-    Log log = LogFactory.getLog(NavBlock.class);
-    if (log instanceof Log4JLogger) {
+    if (isLog4jLogger(NavBlock.class)) {
       Log4jWarningErrorMetricsAppender appender =
           Log4jWarningErrorMetricsAppender.findAppender();
       if (appender != null) {

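[Editor's sketch] With slf4j in front, the logger object no longer reveals its backend through an instanceof Log4JLogger test, so the check moves to GenericsUtil#isLog4jLogger, which asks whether the slf4j binding for the given class routes to log4j. Roughly:

    import org.apache.hadoop.util.GenericsUtil;

    public class BackendProbe {
      public static void main(String[] args) {
        // true only when the slf4j binding for this class is log4j-backed
        System.out.println(GenericsUtil.isLog4jLogger(BackendProbe.class));
      }
    }
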
http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/KeyValueBasedTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/KeyValueBasedTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/KeyValueBasedTimelineStore.java
index 79e2bf2..82db770 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/KeyValueBasedTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/KeyValueBasedTimelineStore.java
@@ -18,8 +18,6 @@
 
 package org.apache.hadoop.yarn.server.timeline;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.service.AbstractService;
@@ -33,6 +31,8 @@ import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEnt
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError;
 import org.apache.hadoop.yarn.server.timeline.TimelineDataManager.CheckAcl;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.util.ArrayList;
@@ -71,8 +71,8 @@ abstract class KeyValueBasedTimelineStore
 
   private boolean serviceStopped = false;
 
-  private static final Log LOG
-      = LogFactory.getLog(KeyValueBasedTimelineStore.class);
+  private static final Logger LOG
+      = LoggerFactory.getLogger(KeyValueBasedTimelineStore.class);
 
   public KeyValueBasedTimelineStore() {
     super(KeyValueBasedTimelineStore.class.getName());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/LeveldbTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/LeveldbTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/LeveldbTimelineStore.java
index ffe0413..e3db1dc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/LeveldbTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/LeveldbTimelineStore.java
@@ -22,8 +22,6 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.io.FileUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability;
@@ -48,6 +46,7 @@ import org.apache.hadoop.yarn.server.timeline.util.LeveldbUtils.KeyParser;
 import org.apache.hadoop.yarn.server.utils.LeveldbIterator;
 import org.fusesource.leveldbjni.JniDBFactory;
 import org.iq80.leveldb.*;
+import org.slf4j.LoggerFactory;
 
 import java.io.File;
 import java.io.IOException;
@@ -118,8 +117,8 @@ import static org.fusesource.leveldbjni.JniDBFactory.bytes;
 @InterfaceStability.Unstable
 public class LeveldbTimelineStore extends AbstractService
     implements TimelineStore {
-  private static final Log LOG = LogFactory
-      .getLog(LeveldbTimelineStore.class);
+  private static final org.slf4j.Logger LOG = LoggerFactory
+      .getLogger(LeveldbTimelineStore.class);
 
   @Private
   @VisibleForTesting
@@ -240,7 +239,7 @@ public class LeveldbTimelineStore extends AbstractService
         localFS.setPermission(dbPath, LEVELDB_DIR_UMASK);
       }
     } finally {
-      IOUtils.cleanup(LOG, localFS);
+      IOUtils.cleanupWithLogger(LOG, localFS);
     }
     LOG.info("Using leveldb path " + dbPath);
     try {
@@ -284,7 +283,7 @@ public class LeveldbTimelineStore extends AbstractService
             " closing db now", e);
       }
     }
-    IOUtils.cleanup(LOG, db);
+    IOUtils.cleanupWithLogger(LOG, db);
     super.serviceStop();
   }
 
@@ -320,7 +319,7 @@ public class LeveldbTimelineStore extends AbstractService
           discardOldEntities(timestamp);
           Thread.sleep(ttlInterval);
         } catch (IOException e) {
-          LOG.error(e);
+          LOG.error(e.toString());
         } catch (InterruptedException e) {
           LOG.info("Deletion thread received interrupt, exiting");
           break;
@@ -394,7 +393,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);            	
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
   }
 
@@ -570,7 +569,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);            	
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
     return events;
   }
@@ -753,7 +752,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);   	
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
   }
   
@@ -925,7 +924,7 @@ public class LeveldbTimelineStore extends AbstractService
     } finally {
       lock.unlock();
       writeLocks.returnLock(lock);
-      IOUtils.cleanup(LOG, writeBatch);
+      IOUtils.cleanupWithLogger(LOG, writeBatch);
     }
 
     for (EntityIdentifier relatedEntity : relatedEntitiesWithoutStartTimes) {
@@ -1376,7 +1375,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);            	
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
   }
 
@@ -1506,7 +1505,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);
     } finally {
-      IOUtils.cleanup(LOG, writeBatch);
+      IOUtils.cleanupWithLogger(LOG, writeBatch);
     }
   }
 
@@ -1548,7 +1547,7 @@ public class LeveldbTimelineStore extends AbstractService
           LOG.error("Got IOException while deleting entities for type " +
               entityType + ", continuing to next type", e);
         } finally {
-          IOUtils.cleanup(LOG, iterator, pfIterator);
+          IOUtils.cleanupWithLogger(LOG, iterator, pfIterator);
           deleteLock.writeLock().unlock();
           if (typeCount > 0) {
             LOG.info("Deleted " + typeCount + " entities of type " +
@@ -1629,7 +1628,7 @@ public class LeveldbTimelineStore extends AbstractService
       String incompatibleMessage = 
           "Incompatible version for timeline store: expecting version " 
               + getCurrentVersion() + ", but loading version " + loadedVersion;
-      LOG.fatal(incompatibleMessage);
+      LOG.error(incompatibleMessage);
       throw new IOException(incompatibleMessage);
     }
   }
@@ -1718,7 +1717,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);            	
     } finally {
-      IOUtils.cleanup(LOG, writeBatch);
+      IOUtils.cleanupWithLogger(LOG, writeBatch);
     }
   }
 
@@ -1755,7 +1754,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);            	
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
   }
 
@@ -1805,7 +1804,7 @@ public class LeveldbTimelineStore extends AbstractService
     } catch(DBException e) {
       throw new IOException(e);            	
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDB.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDB.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDB.java
index 6d10671..5c511a3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDB.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDB.java
@@ -33,8 +33,6 @@ import java.util.Map.Entry;
 
 import org.apache.commons.io.FilenameUtils;
 import org.apache.commons.lang.time.FastDateFormat;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -45,6 +43,8 @@ import org.fusesource.leveldbjni.JniDBFactory;
 import org.iq80.leveldb.DB;
 import org.iq80.leveldb.Options;
 import org.iq80.leveldb.WriteBatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Contains the logic to lookup a leveldb by timestamp so that multiple smaller
@@ -54,7 +54,8 @@ import org.iq80.leveldb.WriteBatch;
 class RollingLevelDB {
 
   /** Logger for this class. */
-  private static final Log LOG = LogFactory.getLog(RollingLevelDB.class);
+  private static final Logger LOG = LoggerFactory.
+      getLogger(RollingLevelDB.class);
   /** Factory to open and create new leveldb instances. */
   private static JniDBFactory factory = new JniDBFactory();
   /** Thread safe date formatter. */
@@ -151,7 +152,7 @@ class RollingLevelDB {
     }
 
     public void close() {
-      IOUtils.cleanup(LOG, writeBatch);
+      IOUtils.cleanupWithLogger(LOG, writeBatch);
     }
   }
 
@@ -346,7 +347,7 @@ class RollingLevelDB {
         .iterator();
     while (iterator.hasNext()) {
       Entry<Long, DB> entry = iterator.next();
-      IOUtils.cleanup(LOG, entry.getValue());
+      IOUtils.cleanupWithLogger(LOG, entry.getValue());
       String dbName = fdf.format(entry.getKey());
       Path path = new Path(rollingDBPath, getName() + "." + dbName);
       try {
@@ -361,9 +362,9 @@ class RollingLevelDB {
 
   public void stop() throws Exception {
     for (DB db : rollingdbs.values()) {
-      IOUtils.cleanup(LOG, db);
+      IOUtils.cleanupWithLogger(LOG, db);
     }
-    IOUtils.cleanup(LOG, lfs);
+    IOUtils.cleanupWithLogger(LOG, lfs);
   }
 
   private long computeNextCheckMillis(long now) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
index 00f6630..1ac170c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java
@@ -38,8 +38,6 @@ import java.util.TreeMap;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability;
@@ -76,6 +74,8 @@ import org.iq80.leveldb.ReadOptions;
 import org.iq80.leveldb.WriteBatch;
 import org.nustaq.serialization.FSTConfiguration;
 import org.nustaq.serialization.FSTClazzNameRegistry;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
@@ -168,8 +168,8 @@ import static org.fusesource.leveldbjni.JniDBFactory.bytes;
 @InterfaceStability.Unstable
 public class RollingLevelDBTimelineStore extends AbstractService implements
     TimelineStore {
-  private static final Log LOG = LogFactory
-      .getLog(RollingLevelDBTimelineStore.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(RollingLevelDBTimelineStore.class);
   private static FSTConfiguration fstConf =
       FSTConfiguration.createDefaultConfiguration();
   // Fall back to 2.24 parsing if 2.50 parsing fails
@@ -368,9 +368,9 @@ public class RollingLevelDBTimelineStore extends AbstractService implements
             + " closing db now", e);
       }
     }
-    IOUtils.cleanup(LOG, domaindb);
-    IOUtils.cleanup(LOG, starttimedb);
-    IOUtils.cleanup(LOG, ownerdb);
+    IOUtils.cleanupWithLogger(LOG, domaindb);
+    IOUtils.cleanupWithLogger(LOG, starttimedb);
+    IOUtils.cleanupWithLogger(LOG, ownerdb);
     entitydb.stop();
     indexdb.stop();
     super.serviceStop();
@@ -399,7 +399,7 @@ public class RollingLevelDBTimelineStore extends AbstractService implements
           discardOldEntities(timestamp);
           Thread.sleep(ttlInterval);
         } catch (IOException e) {
-          LOG.error(e);
+          LOG.error(e.toString());
         } catch (InterruptedException e) {
           LOG.info("Deletion thread received interrupt, exiting");
           break;
@@ -1525,7 +1525,7 @@ public class RollingLevelDBTimelineStore extends AbstractService implements
                   + ". Total start times deleted so far this cycle: "
                   + startTimesCount);
             }
-            IOUtils.cleanup(LOG, writeBatch);
+            IOUtils.cleanupWithLogger(LOG, writeBatch);
             writeBatch = starttimedb.createWriteBatch();
             batchSize = 0;
           }
@@ -1545,7 +1545,7 @@ public class RollingLevelDBTimelineStore extends AbstractService implements
       LOG.info("Deleted " + startTimesCount + "/" + totalCount
           + " start time entities earlier than " + minStartTime);
     } finally {
-      IOUtils.cleanup(LOG, writeBatch);
+      IOUtils.cleanupWithLogger(LOG, writeBatch);
     }
     return startTimesCount;
   }
@@ -1622,7 +1622,7 @@ public class RollingLevelDBTimelineStore extends AbstractService implements
       String incompatibleMessage = "Incompatible version for timeline store: "
           + "expecting version " + getCurrentVersion()
           + ", but loading version " + loadedVersion;
-      LOG.fatal(incompatibleMessage);
+      LOG.error(incompatibleMessage);
       throw new IOException(incompatibleMessage);
     }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/TimelineDataManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/TimelineDataManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/TimelineDataManager.java
index 57a9346..56b71fa 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/TimelineDataManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/TimelineDataManager.java
@@ -26,8 +26,6 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.SortedSet;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.AbstractService;
@@ -45,6 +43,8 @@ import org.apache.hadoop.yarn.server.timeline.security.TimelineACLsManager;
 import org.apache.hadoop.yarn.webapp.BadRequestException;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
 * The class wraps over the timeline store and the ACLs manager. It does some non
@@ -54,7 +54,8 @@ import com.google.common.annotations.VisibleForTesting;
  */
 public class TimelineDataManager extends AbstractService {
 
-  private static final Log LOG = LogFactory.getLog(TimelineDataManager.class);
+  private static final Logger LOG =
+          LoggerFactory.getLogger(TimelineDataManager.class);
   @VisibleForTesting
   public static final String DEFAULT_DOMAIN_ID = "DEFAULT";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/recovery/LeveldbTimelineStateStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/recovery/LeveldbTimelineStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/recovery/LeveldbTimelineStateStore.java
index b62a541..bcd57ef 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/recovery/LeveldbTimelineStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/recovery/LeveldbTimelineStateStore.java
@@ -28,8 +28,6 @@ import java.io.File;
 import java.io.IOException;
 
 import com.google.common.annotations.VisibleForTesting;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -50,6 +48,8 @@ import org.iq80.leveldb.DB;
 import org.iq80.leveldb.DBException;
 import org.iq80.leveldb.Options;
 import org.iq80.leveldb.WriteBatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import static org.fusesource.leveldbjni.JniDBFactory.bytes;
 
@@ -60,8 +60,8 @@ import static org.fusesource.leveldbjni.JniDBFactory.bytes;
 public class LeveldbTimelineStateStore extends
     TimelineStateStore {
 
-  public static final Log LOG =
-      LogFactory.getLog(LeveldbTimelineStateStore.class);
+  public static final Logger LOG =
+      LoggerFactory.getLogger(LeveldbTimelineStateStore.class);
 
   private static final String DB_NAME = "timeline-state-store.ldb";
   private static final FsPermission LEVELDB_DIR_UMASK = FsPermission
@@ -103,7 +103,7 @@ public class LeveldbTimelineStateStore extends
         localFS.setPermission(dbPath, LEVELDB_DIR_UMASK);
       }
     } finally {
-      IOUtils.cleanup(LOG, localFS);
+      IOUtils.cleanupWithLogger(LOG, localFS);
     }
     JniDBFactory factory = new JniDBFactory();
     try {
@@ -131,7 +131,7 @@ public class LeveldbTimelineStateStore extends
 
   @Override
   protected void closeStorage() throws IOException {
-    IOUtils.cleanup(LOG, db);
+    IOUtils.cleanupWithLogger(LOG, db);
   }
 
   @Override
@@ -168,8 +168,8 @@ public class LeveldbTimelineStateStore extends
     } catch (DBException e) {
       throw new IOException(e);
     } finally {
-      IOUtils.cleanup(LOG, ds);
-      IOUtils.cleanup(LOG, batch);
+      IOUtils.cleanupWithLogger(LOG, ds);
+      IOUtils.cleanupWithLogger(LOG, batch);
     }
   }
 
@@ -239,7 +239,7 @@ public class LeveldbTimelineStateStore extends
       key.write(dataStream);
       dataStream.close();
     } finally {
-      IOUtils.cleanup(LOG, dataStream);
+      IOUtils.cleanupWithLogger(LOG, dataStream);
     }
     return memStream.toByteArray();
   }
@@ -253,7 +253,7 @@ public class LeveldbTimelineStateStore extends
     try {
       key.readFields(in);
     } finally {
-      IOUtils.cleanup(LOG, in);
+      IOUtils.cleanupWithLogger(LOG, in);
     }
     state.tokenMasterKeyState.add(key);
   }
@@ -267,7 +267,7 @@ public class LeveldbTimelineStateStore extends
     try {
       data.readFields(in);
     } finally {
-      IOUtils.cleanup(LOG, in);
+      IOUtils.cleanupWithLogger(LOG, in);
     }
     state.tokenState.put(data.getTokenIdentifier(), data.getRenewDate());
   }
@@ -290,7 +290,7 @@ public class LeveldbTimelineStateStore extends
         ++numKeys;
       }
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
     return numKeys;
   }
@@ -314,7 +314,7 @@ public class LeveldbTimelineStateStore extends
     } catch (DBException e) {
       throw new IOException(e);
     } finally {
-      IOUtils.cleanup(LOG, iterator);
+      IOUtils.cleanupWithLogger(LOG, iterator);
     }
     return numTokens;
   }
@@ -332,7 +332,7 @@ public class LeveldbTimelineStateStore extends
       try {
         state.latestSequenceNumber = in.readInt();
       } finally {
-        IOUtils.cleanup(LOG, in);
+        IOUtils.cleanupWithLogger(LOG, in);
       }
     }
   }
@@ -412,7 +412,7 @@ public class LeveldbTimelineStateStore extends
       String incompatibleMessage =
           "Incompatible version for timeline state store: expecting version "
               + getCurrentVersion() + ", but loading version " + loadedVersion;
-      LOG.fatal(incompatibleMessage);
+      LOG.error(incompatibleMessage);
       throw new IOException(incompatibleMessage);
     }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineACLsManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineACLsManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineACLsManager.java
index 25252fc..6c32eec 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineACLsManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineACLsManager.java
@@ -24,8 +24,6 @@ import java.util.HashMap;
 import java.util.Map;
 
 import org.apache.commons.collections.map.LRUMap;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -41,6 +39,8 @@ import org.apache.hadoop.yarn.server.timeline.TimelineStore;
 import org.apache.hadoop.yarn.util.StringHelper;
 
 import com.google.common.annotations.VisibleForTesting;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
 * <code>TimelineACLsManager</code> checks the entity level timeline data access.
@@ -48,7 +48,8 @@ import com.google.common.annotations.VisibleForTesting;
 @Private
 public class TimelineACLsManager {
 
-  private static final Log LOG = LogFactory.getLog(TimelineACLsManager.class);
+  private static final Logger LOG = LoggerFactory.
+      getLogger(TimelineACLsManager.class);
   private static final int DOMAIN_ACCESS_ENTRY_CACHE_SIZE = 100;
 
   private AdminACLsManager adminAclsManager;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineDelegationTokenSecretManagerService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineDelegationTokenSecretManagerService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineDelegationTokenSecretManagerService.java
index 60a0348..0c6892a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineDelegationTokenSecretManagerService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineDelegationTokenSecretManagerService.java
@@ -21,8 +21,6 @@ package org.apache.hadoop.yarn.server.timeline.security;
 import java.io.IOException;
 import java.util.Map.Entry;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -35,6 +33,8 @@ import org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier;
 import org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore;
 import org.apache.hadoop.yarn.server.timeline.recovery.TimelineStateStore;
 import org.apache.hadoop.yarn.server.timeline.recovery.TimelineStateStore.TimelineServiceState;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The service wrapper of {@link TimelineDelegationTokenSecretManager}
@@ -118,8 +118,8 @@ public class TimelineDelegationTokenSecretManagerService extends
   public static class TimelineDelegationTokenSecretManager extends
       AbstractDelegationTokenSecretManager<TimelineDelegationTokenIdentifier> {
 
-    public static final Log LOG =
-        LogFactory.getLog(TimelineDelegationTokenSecretManager.class);
+    public static final Logger LOG =
+        LoggerFactory.getLogger(TimelineDelegationTokenSecretManager.class);
 
     private TimelineStateStore stateStore;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
index ad4e2bb..be8e3c5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
@@ -43,8 +43,6 @@ import javax.ws.rs.core.Context;
 import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.StringUtils;
@@ -68,13 +66,16 @@ import org.apache.hadoop.yarn.webapp.NotFoundException;
 
 import com.google.inject.Inject;
 import com.google.inject.Singleton;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 @Singleton
 @Path("/ws/v1/timeline")
 //TODO: support XML serialization/deserialization
 public class TimelineWebServices {
 
-  private static final Log LOG = LogFactory.getLog(TimelineWebServices.class);
+  private static final Logger LOG = LoggerFactory
+      .getLogger(TimelineWebServices.class);
 
   private TimelineDataManager timelineDataManager;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
index 15a00d2..df4adbe 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
@@ -32,8 +32,6 @@ import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.times;
 import static org.mockito.Mockito.verify;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -51,12 +49,14 @@ import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import org.mockito.Mockito;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class TestFileSystemApplicationHistoryStore extends
     ApplicationHistoryStoreTestUtils {
 
-  private static Log LOG = LogFactory
-    .getLog(TestFileSystemApplicationHistoryStore.class.getName());
+  private static final Logger LOG = LoggerFactory
+      .getLogger(TestFileSystemApplicationHistoryStore.class.getName());
 
   private FileSystem fs;
   private Path fsWorkingPath;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/839e077f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/TestLeveldbTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/TestLeveldbTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/TestLeveldbTimelineStore.java
index 0c292d8..f68a1c4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/TestLeveldbTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/TestLeveldbTimelineStore.java
@@ -160,7 +160,7 @@ public class TestLeveldbTimelineStore extends TimelineStoreTestUtils {
     } catch(DBException e) {
       throw new IOException(e);
     } finally {
-      IOUtils.cleanup(null, iterator, pfIterator);
+      IOUtils.cleanupWithLogger(null, iterator, pfIterator);
     }
   }
 


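The same migration pattern repeats across every file in this patch; a minimal
sketch of the before/after (the class name is illustrative, not from the
patch):

    import org.apache.hadoop.io.IOUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ExampleTimelineStore {
      // Before: private static final Log LOG =
      //     LogFactory.getLog(ExampleTimelineStore.class);
      private static final Logger LOG =
          LoggerFactory.getLogger(ExampleTimelineStore.class);

      void close(java.io.Closeable resource) {
        // IOUtils.cleanup() takes a commons-logging Log, so call sites move
        // to cleanupWithLogger(), which takes an SLF4J Logger.
        IOUtils.cleanupWithLogger(LOG, resource);
        // SLF4J has no FATAL level, so LOG.fatal(msg) becomes LOG.error(msg),
        // and LOG.error(e) becomes LOG.error(e.toString()) because SLF4J has
        // no error(Object) overload.
      }
    }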


[38/50] [abbrv] hadoop git commit: HDFS-12278. LeaseManager operations are inefficient in 2.8. Contributed by Rushabh S Shah.

Posted by wa...@apache.org.
HDFS-12278. LeaseManager operations are inefficient in 2.8. Contributed by Rushabh S Shah.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b5c02f95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b5c02f95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b5c02f95

Branch: refs/heads/YARN-5881
Commit: b5c02f95b5a2fcb8931d4a86f8192caa18009ea9
Parents: ec69414
Author: Kihwal Lee <ki...@apache.org>
Authored: Wed Aug 9 16:46:05 2017 -0500
Committer: Kihwal Lee <ki...@apache.org>
Committed: Wed Aug 9 16:46:05 2017 -0500

----------------------------------------------------------------------
 .../hadoop/hdfs/server/namenode/LeaseManager.java | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5c02f95/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
index 6578ba9..35ec063 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
@@ -26,10 +26,11 @@ import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashSet;
 import java.util.List;
-import java.util.PriorityQueue;
+import java.util.NavigableSet;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.TreeSet;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
@@ -87,11 +88,15 @@ public class LeaseManager {
   // Mapping: leaseHolder -> Lease
   private final SortedMap<String, Lease> leases = new TreeMap<>();
   // Set of: Lease
-  private final PriorityQueue<Lease> sortedLeases = new PriorityQueue<>(512,
+  private final NavigableSet<Lease> sortedLeases = new TreeSet<>(
       new Comparator<Lease>() {
         @Override
         public int compare(Lease o1, Lease o2) {
-          return Long.signum(o1.getLastUpdate() - o2.getLastUpdate());
+          if (o1.getLastUpdate() != o2.getLastUpdate()) {
+            return Long.signum(o1.getLastUpdate() - o2.getLastUpdate());
+          } else {
+            return o1.holder.compareTo(o2.holder);
+          }
         }
   });
   // INodeID -> Lease
@@ -528,9 +533,10 @@ public class LeaseManager {
 
     long start = monotonicNow();
 
-    while(!sortedLeases.isEmpty() && sortedLeases.peek().expiredHardLimit()
-      && !isMaxLockHoldToReleaseLease(start)) {
-      Lease leaseToCheck = sortedLeases.peek();
+    while(!sortedLeases.isEmpty() &&
+        sortedLeases.first().expiredHardLimit()
+        && !isMaxLockHoldToReleaseLease(start)) {
+      Lease leaseToCheck = sortedLeases.first();
       LOG.info(leaseToCheck + " has expired hard limit");
 
       final List<Long> removing = new ArrayList<>();


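The comparator is the subtle part of this change: a TreeSet treats
comparator-equal elements as duplicates, so ordering by getLastUpdate() alone
would silently drop distinct leases that share a timestamp, which is why the
new comparator breaks ties on the holder. The payoff is that remove() becomes
O(log n) instead of PriorityQueue's O(n) linear scan. A minimal sketch of the
idea (the Lease stub is illustrative):

    import java.util.Comparator;
    import java.util.NavigableSet;
    import java.util.TreeSet;

    class Lease {
      final String holder;
      final long lastUpdate;
      Lease(String holder, long lastUpdate) {
        this.holder = holder;
        this.lastUpdate = lastUpdate;
      }
    }

    public class SortedLeases {
      public static void main(String[] args) {
        // Primary key: lastUpdate; tie-break: holder, so two leases with the
        // same timestamp are never collapsed into one entry.
        NavigableSet<Lease> sortedLeases = new TreeSet<>(
            Comparator.<Lease>comparingLong(l -> l.lastUpdate)
                .thenComparing(l -> l.holder));
        sortedLeases.add(new Lease("holder-a", 100L));
        sortedLeases.add(new Lease("holder-b", 100L)); // kept by the tie-break
        System.out.println(sortedLeases.size());         // 2
        System.out.println(sortedLeases.first().holder); // holder-a; first() replaces peek()
      }
    }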


[22/50] [abbrv] hadoop git commit: YARN-6726. Fix issues with docker commands executed by container-executor. (Shane Kumpf via wangda)

Posted by wa...@apache.org.
YARN-6726. Fix issues with docker commands executed by container-executor. (Shane Kumpf via wangda)

Change-Id: If1b1827345f98f0a49cc7e39d1ba41fbeed5e911


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1794de3e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1794de3e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1794de3e

Branch: refs/heads/YARN-5881
Commit: 1794de3ea4bbd6863fb43dbae9f5a46b6e4230a0
Parents: 735fce5
Author: Wangda Tan <wa...@apache.org>
Authored: Tue Aug 8 12:56:29 2017 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Tue Aug 8 12:56:29 2017 -0700

----------------------------------------------------------------------
 .../src/CMakeLists.txt                          |   1 +
 .../impl/container-executor.c                   |  78 +++++++++++-
 .../impl/container-executor.h                   |  17 ++-
 .../impl/utils/string-utils.c                   |  86 ++++++++++++++
 .../impl/utils/string-utils.h                   |  32 +++++
 .../test/test-container-executor.c              | 119 ++++++++++++++++++-
 6 files changed, 327 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1794de3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
index f7fe83d..5b52536 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
@@ -89,6 +89,7 @@ add_library(container
     main/native/container-executor/impl/configuration.c
     main/native/container-executor/impl/container-executor.c
     main/native/container-executor/impl/get_executable.c
+    main/native/container-executor/impl/utils/string-utils.c
 )
 
 add_executable(container-executor

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1794de3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 99f7b56..def628e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -18,6 +18,7 @@
 
 #include "configuration.h"
 #include "container-executor.h"
+#include "utils/string-utils.h"
 
 #include <inttypes.h>
 #include <libgen.h>
@@ -40,6 +41,7 @@
 #include <sys/mount.h>
 #include <sys/wait.h>
 #include <getopt.h>
+#include <regex.h>
 
 #include "config.h"
 
@@ -79,6 +81,11 @@ static const char* TC_READ_STATS_OPTS [] = { "-s",  "-b", NULL};
 //struct to store the user details
 struct passwd *user_detail = NULL;
 
+//Docker container related constants.
+static const char* DOCKER_CONTAINER_NAME_PREFIX = "container_";
+static const char* DOCKER_CLIENT_CONFIG_ARG = "--config=";
+static const char* DOCKER_PULL_COMMAND = "pull";
+
 FILE* LOGFILE = NULL;
 FILE* ERRORFILE = NULL;
 
@@ -1208,6 +1215,27 @@ char** tokenize_docker_command(const char *input, int *split_counter) {
   return linesplit;
 }
 
+int execute_regex_match(const char *regex_str, const char *input) {
+  regex_t regex;
+  int regex_match;
+  if (0 != regcomp(&regex, regex_str, REG_EXTENDED|REG_NOSUB)) {
+    fprintf(LOGFILE, "Unable to compile regex.");
+    fflush(LOGFILE);
+    exit(ERROR_COMPILING_REGEX);
+  }
+  regex_match = regexec(&regex, input, (size_t) 0, NULL, 0);
+  regfree(&regex);
+  if(0 == regex_match) {
+    return 0;
+  }
+  return 1;
+}
+
+int validate_docker_image_name(const char *image_name) {
+  char *regex_str = "^(([a-zA-Z0-9.-]+)(:[0-9]+)?/)?([a-z0-9_./-]+)(:[a-zA-Z0-9_.-]+)?$";
+  return execute_regex_match(regex_str, image_name);
+}
+
 char* sanitize_docker_command(const char *line) {
   static struct option long_options[] = {
     {"name", required_argument, 0, 'n' },
@@ -1222,6 +1250,7 @@ char* sanitize_docker_command(const char *line) {
     {"cap-drop", required_argument, 0, 'o' },
     {"device", required_argument, 0, 'i' },
     {"detach", required_argument, 0, 't' },
+    {"format", required_argument, 0, 'f' },
     {0, 0, 0, 0}
   };
 
@@ -1240,6 +1269,35 @@ char* sanitize_docker_command(const char *line) {
   if(output == NULL) {
     exit(OUT_OF_MEMORY);
   }
+
+  // Handle docker client config option.
+  if(0 == strncmp(linesplit[0], DOCKER_CLIENT_CONFIG_ARG, strlen(DOCKER_CLIENT_CONFIG_ARG))) {
+    strcat(output, linesplit[0]);
+    strcat(output, " ");
+    long index = 0;
+    while(index < split_counter) {
+      linesplit[index] = linesplit[index + 1];
+      if (linesplit[index] == NULL) {
+        split_counter--;
+        break;
+      }
+      index++;
+    }
+  }
+
+  // Handle docker pull and image name validation.
+  if (0 == strncmp(linesplit[0], DOCKER_PULL_COMMAND, strlen(DOCKER_PULL_COMMAND))) {
+    if (0 != validate_docker_image_name(linesplit[1])) {
+      fprintf(ERRORFILE, "Invalid Docker image name, exiting.");
+      fflush(ERRORFILE);
+      exit(DOCKER_IMAGE_INVALID);
+    }
+    strcat(output, linesplit[0]);
+    strcat(output, " ");
+    strcat(output, linesplit[1]);
+    return output;
+  }
+
   strcat(output, linesplit[0]);
   strcat(output, " ");
   optind = 1;
@@ -1287,6 +1345,11 @@ char* sanitize_docker_command(const char *line) {
       case 't':
         quote_and_append_arg(&output, &output_size, "--detach=", optarg);
         break;
+      case 'f':
+        strcat(output, "--format=");
+        strcat(output, optarg);
+        strcat(output, " ");
+        break;
       default:
         fprintf(LOGFILE, "Unknown option in docker command, character %d %c, optionindex = %d\n", c, c, optind);
         fflush(LOGFILE);
@@ -1297,7 +1360,16 @@ char* sanitize_docker_command(const char *line) {
 
   if(optind < split_counter) {
     while(optind < split_counter) {
-      quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
+      if (0 == strncmp(linesplit[optind], DOCKER_CONTAINER_NAME_PREFIX, strlen(DOCKER_CONTAINER_NAME_PREFIX))) {
+        if (1 != validate_container_id(linesplit[optind])) {
+          fprintf(ERRORFILE, "Specified container_id=%s is invalid\n", linesplit[optind]);
+          fflush(ERRORFILE);
+          exit(DOCKER_CONTAINER_NAME_INVALID);
+        }
+        strcat(output, linesplit[optind++]);
+      } else {
+        quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
+      }
     }
   }
 
@@ -1328,8 +1400,8 @@ char* parse_docker_command_file(const char* command_file) {
   if(ret == NULL) {
     exit(ERROR_SANITIZING_DOCKER_COMMAND);
   }
-  fprintf(LOGFILE, "Using command %s\n", ret);
-  fflush(LOGFILE);
+  fprintf(ERRORFILE, "Using command %s\n", ret);
+  fflush(ERRORFILE);
 
   return ret;
 }

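The image-name check added above translates directly to java.util.regex; a
minimal sketch of the same validation (the pattern is copied from the patch,
the Java wrapper around it is illustrative):

    import java.util.regex.Pattern;

    public class DockerImageNameCheck {
      // Same expression as validate_docker_image_name(): optional registry
      // (host[:port]/), lowercase repository path, optional tag.
      private static final Pattern IMAGE_NAME = Pattern.compile(
          "^(([a-zA-Z0-9.-]+)(:[0-9]+)?/)?([a-z0-9_./-]+)(:[a-zA-Z0-9_.-]+)?$");

      static boolean isValid(String name) {
        return IMAGE_NAME.matcher(name).matches();
      }

      public static void main(String[] args) {
        System.out.println(isValid("registry.com:5000/user/ubuntu:latest")); // true
        System.out.println(isValid("UBUNTU"));                               // false
        System.out.println(isValid("ubuntu' || touch /tmp/file #"));         // false
      }
    }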
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1794de3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
index e40bd90..1dc0491 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
@@ -74,7 +74,10 @@ enum errorcodes {
   COULD_NOT_CREATE_APP_LOG_DIRECTORIES = 36,
   COULD_NOT_CREATE_TMP_DIRECTORIES = 37,
   ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS = 38,
-  ERROR_SANITIZING_DOCKER_COMMAND = 39
+  ERROR_SANITIZING_DOCKER_COMMAND = 39,
+  DOCKER_IMAGE_INVALID = 40,
+  DOCKER_CONTAINER_NAME_INVALID = 41,
+  ERROR_COMPILING_REGEX = 42
 };
 
 enum operations {
@@ -309,3 +312,15 @@ int run_docker(const char *command_file);
  * Sanitize docker commands. Returns NULL if there was any failure.
 */
 char* sanitize_docker_command(const char *line);
+
+/*
+ * Compile the regex_str and determine if the input string matches.
+ * Return 0 on match, 1 of non-match.
+ */
+int execute_regex_match(const char *regex_str, const char *input);
+
+/**
+ * Validate the docker image name matches the expected input.
+ * Return 0 on success.
+ */
+int validate_docker_image_name(const char *image_name);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1794de3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c
new file mode 100644
index 0000000..703d484
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <strings.h>
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+/*
+ * Returns 1 if every character in the input string is a digit,
+ * 0 otherwise (including for the empty string).
+ */
+static int all_numbers(char* input) {
+  if (0 == strlen(input)) {
+    return 0;
+  }
+
+  for (int i = 0; i < strlen(input); i++) {
+    if (input[i] < '0' || input[i] > '9') {
+      return 0;
+    }
+  }
+  return 1;
+}
+
+int validate_container_id(const char* input) {
+  /*
+   * Two different forms of container_id
+   * container_e17_1410901177871_0001_01_000005
+   * container_1410901177871_0001_01_000005
+   */
+  char* input_cpy = malloc(strlen(input) + 1);  // +1 for the NUL that strcpy appends
+  strcpy(input_cpy, input);
+  char* p = strtok(input_cpy, "_");
+  int idx = 0;
+  while (p != NULL) {
+    if (0 == idx) {
+      if (0 != strcmp("container", p)) {
+        return 0;
+      }
+    } else if (1 == idx) {
+      // this could be e[n][n], or [n][n]...
+      if (!all_numbers(p)) {
+        if (strlen(p) == 0) {
+          return 0;
+        }
+        if (p[0] != 'e') {
+          return 0;
+        }
+        if (!all_numbers(p + 1)) {
+          return 0;
+        }
+      }
+    } else {
+      // otherwise, should be all numbers
+      if (!all_numbers(p)) {
+        return 0;
+      }
+    }
+
+    p = strtok(NULL, "_");
+    idx++;
+  }
+  free(input_cpy);
+
+  // We should have [5,6] elements split by '_'
+  if (idx > 6 || idx < 5) {
+    return 0;
+  }
+  return 1;
+}
\ No newline at end of file

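validate_container_id() accepts the two id forms named in its comment; the
same grammar can be restated as a single pattern, shown here as a Java sketch
(the regex is an illustrative restatement, not code from the patch):

    import java.util.regex.Pattern;

    public class ContainerIdCheck {
      // container_e17_1410901177871_0001_01_000005  (with epoch)
      // container_1410901177871_0001_01_000005      (without epoch)
      private static final Pattern CONTAINER_ID = Pattern.compile(
          "^container_(e[0-9]+_)?[0-9]+_[0-9]+_[0-9]+_[0-9]+$");

      static boolean isValid(String id) {
        return CONTAINER_ID.matcher(id).matches();
      }

      public static void main(String[] args) {
        System.out.println(isValid("container_e134_1499953498516_50875_01_000007"));   // true
        System.out.println(isValid("container_e1_12312_11111_02_000001 | /tmp/file")); // false
      }
    }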
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1794de3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.h
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.h
new file mode 100644
index 0000000..0a41ad1
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.h
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifdef __FreeBSD__
+#define _WITH_GETLINE
+#endif
+
+#ifndef _UTILS_STRING_UTILS_H_
+#define _UTILS_STRING_UTILS_H_
+
+/*
+ * Validate that the input is a well-formed YARN container id, e.g.
+ * container_e17_1410901177871_0001_01_000005 or
+ * container_1410901177871_0001_01_000005.
+ * Returns 1 if valid, 0 otherwise.
+ */
+int validate_container_id(const char* input);
+
+#endif
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1794de3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index cf5f119..3202652 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -17,6 +17,7 @@
  */
 #include "configuration.h"
 #include "container-executor.h"
+#include "utils/string-utils.h"
 
 #include <inttypes.h>
 #include <errno.h>
@@ -1176,7 +1177,13 @@ void test_sanitize_docker_command() {
     "run --name=$CID --user=nobody -d --workdir=/yarn/local/cdir --privileged --rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true --cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --hostname=test.host.name --cap-drop=ALL --cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP --cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID --cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE --cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /yarn/local/cdir:/yarn/local/cdir -v /yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash /yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
     "run --name=cname --user=nobody -d --workdir=/yarn/local/cdir --privileged --rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true --cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --hostname=test.host.name --cap-drop=ALL --cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP --cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID --cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE --cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /yarn/local/cdir:/yarn/local/cdir -v /yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu || touch /tmp/file # bash /yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
     "run --name=cname --user=nobody -d --workdir=/yarn/local/cdir --privileged --rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true --cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --hostname=test.host.name --cap-drop=ALL --cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP --cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID --cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE --cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /yarn/local/cdir:/yarn/local/cdir -v /yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu' || touch /tmp/file # bash /yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
-    "run ''''''''"
+    "run ''''''''",
+    "inspect --format='{{range(.NetworkSettings.Networks)}}{{.IPAddress}},{{end}}{{.Config.Hostname}}' container_e111_1111111111111_1111_01_111111",
+    "rm container_e111_1111111111111_1111_01_111111",
+    "stop container_e111_1111111111111_1111_01_111111",
+    "pull ubuntu",
+    "pull registry.com/user/ubuntu",
+    "--config=/yarn/local/cdir/ pull registry.com/user/ubuntu"
   };
   char *expected_output[] = {
       "run --name='cname' --user='nobody' -d --workdir='/yarn/local/cdir' --privileged --rm --device='/sys/fs/cgroup/device:/sys/fs/cgroup/device' --detach='true' --cgroup-parent='/sys/fs/cgroup/cpu/yarn/cid' --net='host' --hostname='test.host.name' --cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD' --cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID' --cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID' --cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID' --cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE' -v '/sys/fs/cgroup:/sys/fs/cgroup:ro' -v '/yarn/local/cdir:/yarn/local/cdir' -v '/yarn/local/usercache/test/:/yarn/local/usercache/test/' 'ubuntu' 'bash' '/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh' ",
@@ -1184,12 +1191,18 @@ void test_sanitize_docker_command() {
       "run --name='cname' --user='nobody' -d --workdir='/yarn/local/cdir' --privileged --rm --device='/sys/fs/cgroup/device:/sys/fs/cgroup/device' --detach='true' --cgroup-parent='/sys/fs/cgroup/cpu/yarn/cid' --net='host' --hostname='test.host.name' --cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD' --cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID' --cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID' --cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID' --cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE' -v '/sys/fs/cgroup:/sys/fs/cgroup:ro' -v '/yarn/local/cdir:/yarn/local/cdir' -v '/yarn/local/usercache/test/:/yarn/local/usercache/test/' 'ubuntu' '||' 'touch' '/tmp/file' '#' 'bash' '/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh' ",
       "run --name='cname' --user='nobody' -d --workdir='/yarn/local/cdir' --privileged --rm --device='/sys/fs/cgroup/device:/sys/fs/cgroup/device' --detach='true' --cgroup-parent='/sys/fs/cgroup/cpu/yarn/cid' --net='host' --hostname='test.host.name' --cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD' --cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID' --cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID' --cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID' --cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE' -v '/sys/fs/cgroup:/sys/fs/cgroup:ro' -v '/yarn/local/cdir:/yarn/local/cdir' -v '/yarn/local/usercache/test/:/yarn/local/usercache/test/' 'ubuntu'\"'\"'' '||' 'touch' '/tmp/file' '#' 'bash' '/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh' ",
       "run ''\"'\"''\"'\"''\"'\"''\"'\"''\"'\"''\"'\"''\"'\"''\"'\"'' ",
+      "inspect --format='{{range(.NetworkSettings.Networks)}}{{.IPAddress}},{{end}}{{.Config.Hostname}}' container_e111_1111111111111_1111_01_111111",
+      "rm container_e111_1111111111111_1111_01_111111",
+      "stop container_e111_1111111111111_1111_01_111111",
+      "pull ubuntu",
+      "pull registry.com/user/ubuntu",
+      "--config=/yarn/local/cdir/ pull registry.com/user/ubuntu"
   };
 
   int input_size = sizeof(input) / sizeof(char *);
   int i = 0;
   for(i = 0;  i < input_size; i++) {
-    char *command = (char *) calloc(strlen(input[i]), sizeof(char));
+    char *command = (char *) calloc(strlen(input[i]) + 1 , sizeof(char));
     strncpy(command, input[i], strlen(input[i]));
     char *op = sanitize_docker_command(command);
     if(strncmp(expected_output[i], op, strlen(expected_output[i])) != 0) {
@@ -1200,6 +1213,102 @@ void test_sanitize_docker_command() {
   }
 }
 
+void test_validate_docker_image_name() {
+
+  char *good_input[] = {
+    "ubuntu",
+    "ubuntu:latest",
+    "ubuntu:14.04",
+    "ubuntu:LATEST",
+    "registry.com:5000/user/ubuntu",
+    "registry.com:5000/user/ubuntu:latest",
+    "registry.com:5000/user/ubuntu:0.1.2.3",
+    "registry.com/user/ubuntu",
+    "registry.com/user/ubuntu:latest",
+    "registry.com/user/ubuntu:0.1.2.3",
+    "registry.com/user/ubuntu:test-image",
+    "registry.com/user/ubuntu:test_image",
+    "registry.com/ubuntu",
+    "user/ubuntu",
+    "user/ubuntu:0.1.2.3",
+    "user/ubuntu:latest",
+    "user/ubuntu:test_image",
+    "user/ubuntu.test:test_image",
+    "user/ubuntu-test:test-image",
+    "registry.com/ubuntu/ubuntu/ubuntu"
+  };
+
+  char *bad_input[] = {
+    "UBUNTU",
+    "registry.com|5000/user/ubuntu",
+    "registry.com | 5000/user/ubuntu",
+    "ubuntu' || touch /tmp/file #",
+    "ubuntu || touch /tmp/file #",
+    "''''''''",
+    "bad_host_name:5000/user/ubuntu",
+    "registry.com:foo/ubuntu/ubuntu/ubuntu",
+    "registry.com/ubuntu:foo/ubuntu/ubuntu"
+  };
+
+  int good_input_size = sizeof(good_input) / sizeof(char *);
+  int i = 0;
+  for(i = 0; i < good_input_size; i++) {
+    int op = validate_docker_image_name(good_input[i]);
+    if(0 != op) {
+      printf("\nFAIL: docker image name %s is invalid", good_input[i]);
+      exit(1);
+    }
+  }
+
+  int bad_input_size = sizeof(bad_input) / sizeof(char *);
+  int j = 0;
+  for(j = 0; j < bad_input_size; j++) {
+    int op = validate_docker_image_name(bad_input[j]);
+    if(1 != op) {
+      printf("\nFAIL: docker image name %s is valid, expected invalid", bad_input[j]);
+      exit(1);
+    }
+  }
+}
+
+void test_validate_container_id() {
+  char *good_input[] = {
+    "container_e134_1499953498516_50875_01_000007",
+    "container_1499953498516_50875_01_000007",
+    "container_e1_12312_11111_02_000001"
+  };
+
+  char *bad_input[] = {
+    "CONTAINER",
+    "container_e1_12312_11111_02_000001 | /tmp/file"
+    "container_e1_12312_11111_02_000001 || # /tmp/file",
+    "container_e1_12312_11111_02_000001 # /tmp/file",
+    "container_e1_12312_11111_02_000001' || touch /tmp/file #",
+    "ubuntu || touch /tmp/file #",
+    "''''''''"
+  };
+
+  int good_input_size = sizeof(good_input) / sizeof(char *);
+  int i = 0;
+  for(i = 0; i < good_input_size; i++) {
+    int op = validate_container_id(good_input[i]);
+    if(1 != op) {
+      printf("FAIL: docker container name %s is invalid\n", good_input[i]);
+      exit(1);
+    }
+  }
+
+  int bad_input_size = sizeof(bad_input) / sizeof(char *);
+  int j = 0;
+  for(j = 0; j < bad_input_size; j++) {
+    int op = validate_container_id(bad_input[j]);
+    if(0 != op) {
+      printf("FAIL: docker container name %s is valid, expected invalid\n", bad_input[j]);
+      exit(1);
+    }
+  }
+}
+
 // This test is expected to be executed either by a regular
 // user or by root. If executed by a regular user it doesn't
 // test all the functions that would depend on changing the
@@ -1297,6 +1406,12 @@ int main(int argc, char **argv) {
   printf("\nTesting sanitize docker commands()\n");
   test_sanitize_docker_command();
 
+  printf("\nTesting validate_docker_image_name()\n");
+  test_validate_docker_image_name();
+
+  printf("\nTesting validate_container_id()\n");
+  test_validate_container_id();
+
   test_check_user(0);
 
 #ifdef __APPLE__


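Taken together, the good and bad vectors above pin down the accepted image-name grammar: an optional hostname-like registry (numeric port only), an optional lowercase repository path, a lowercase image name, and an optional tag that may be mixed case. Below is a hypothetical Java regex that reproduces this grammar against the vectors above; it is an illustration of the grammar only, not the C check that container-executor actually ships:

    import java.util.regex.Pattern;

    public class ImageNameGrammarSketch {
      // Hypothetical pattern inferred from the test vectors; the real
      // container-executor check is implemented in C and may differ.
      private static final Pattern IMAGE_NAME = Pattern.compile(
          "^(?:[a-zA-Z0-9.-]+(?::[0-9]+)?/)?"   // optional registry host[:port]/
          + "(?:[a-z0-9._-]+/)*"                // optional repository path parts
          + "[a-z0-9._-]+"                      // lowercase image name
          + "(?::[a-zA-Z0-9._-]+)?$");          // optional :tag, mixed case allowed

      public static void main(String[] args) {
        System.out.println(IMAGE_NAME.matcher("registry.com:5000/user/ubuntu:latest").matches()); // true
        System.out.println(IMAGE_NAME.matcher("UBUNTU").matches());                               // false
        System.out.println(IMAGE_NAME.matcher("registry.com:foo/ubuntu").matches());              // false, non-numeric port
      }
    }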


[33/50] [abbrv] hadoop git commit: YARN-6515. Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager. Contributed by Naganarasimha G R.

Posted by wa...@apache.org.
YARN-6515. Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager. Contributed by Naganarasimha G R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1a18d5e5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1a18d5e5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1a18d5e5

Branch: refs/heads/YARN-5881
Commit: 1a18d5e514d13aa3a88e9b6089394a27296d6bc3
Parents: 8a4bff0
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Aug 9 21:56:34 2017 +0900
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Aug 9 21:56:43 2017 +0900

----------------------------------------------------------------------
 .../server/nodemanager/NodeStatusUpdaterImpl.java    | 11 +++++------
 .../localizer/ContainerLocalizer.java                | 15 ++++++++-------
 .../containermanager/monitor/ContainerMetrics.java   |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)
----------------------------------------------------------------------

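The warning fixed in NodeStatusUpdaterImpl and ContainerLocalizer below is the inefficient map-iterator pattern (SpotBugs' WMI_WRONG_MAP_ITERATOR): walking keySet() and calling get() per key pays for a second hash lookup on every entry. A minimal sketch of the before/after idiom, with illustrative names:

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Map.Entry;

    public class EntrySetIterationSketch {
      public static void main(String[] args) {
        Map<String, Long> recentlyStopped = new HashMap<>();
        recentlyStopped.put("container_1", 100L);
        long now = 150L;

        // Flagged idiom: each iteration pays for an extra get() lookup.
        for (String key : recentlyStopped.keySet()) {
          Long deadline = recentlyStopped.get(key); // second hash lookup
        }

        // Preferred idiom, as in the diffs below: entrySet() yields key and
        // value together, and Iterator.remove() prunes safely mid-iteration.
        Iterator<Entry<String, Long>> it = recentlyStopped.entrySet().iterator();
        while (it.hasNext()) {
          Entry<String, Long> e = it.next();
          if (e.getValue() < now) {
            it.remove();
          }
        }
      }
    }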

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a18d5e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
index 00073d8..b5ec383 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
@@ -639,7 +639,6 @@ public class NodeStatusUpdaterImpl extends AbstractService implements
   public void removeOrTrackCompletedContainersFromContext(
       List<ContainerId> containerIds) throws IOException {
     Set<ContainerId> removedContainers = new HashSet<ContainerId>();
-    Set<ContainerId> removedNullContainers = new HashSet<ContainerId>();
 
     pendingContainersToRemove.addAll(containerIds);
     Iterator<ContainerId> iter = pendingContainersToRemove.iterator();
@@ -649,7 +648,6 @@ public class NodeStatusUpdaterImpl extends AbstractService implements
       Container nmContainer = context.getContainers().get(containerId);
       if (nmContainer == null) {
         iter.remove();
-        removedNullContainers.add(containerId);
       } else if (nmContainer.getContainerState().equals(
         org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerState.DONE)) {
         context.getContainers().remove(containerId);
@@ -712,11 +710,12 @@ public class NodeStatusUpdaterImpl extends AbstractService implements
   public void removeVeryOldStoppedContainersFromCache() {
     synchronized (recentlyStoppedContainers) {
       long currentTime = System.currentTimeMillis();
-      Iterator<ContainerId> i =
-          recentlyStoppedContainers.keySet().iterator();
+      Iterator<Entry<ContainerId, Long>> i =
+          recentlyStoppedContainers.entrySet().iterator();
       while (i.hasNext()) {
-        ContainerId cid = i.next();
-        if (recentlyStoppedContainers.get(cid) < currentTime) {
+        Entry<ContainerId, Long> mapEntry = i.next();
+        ContainerId cid = mapEntry.getKey();
+        if (mapEntry.getValue() < currentTime) {
           if (!context.getContainers().containsKey(cid)) {
             i.remove();
             try {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a18d5e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
index 8a46491..bb4b7f3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
@@ -17,6 +17,8 @@
 */
 package org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer;
 
+import static org.apache.hadoop.util.Shell.getAllShells;
+
 import java.io.DataInputStream;
 import java.io.File;
 import java.io.IOException;
@@ -30,6 +32,7 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.Set;
 import java.util.Stack;
 import java.util.concurrent.Callable;
@@ -81,8 +84,6 @@ import org.apache.hadoop.yarn.util.FSDownload;
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
 
-import static org.apache.hadoop.util.Shell.getAllShells;
-
 public class ContainerLocalizer {
 
   static final Log LOG = LogFactory.getLog(ContainerLocalizer.class);
@@ -348,13 +349,13 @@ public class ContainerLocalizer {
     final List<LocalResourceStatus> currentResources =
       new ArrayList<LocalResourceStatus>();
     // TODO: Synchronization??
-    for (Iterator<LocalResource> i = pendingResources.keySet().iterator();
-         i.hasNext();) {
-      LocalResource rsrc = i.next();
+    for (Iterator<Entry<LocalResource, Future<Path>>> i =
+        pendingResources.entrySet().iterator(); i.hasNext();) {
+      Entry<LocalResource, Future<Path>> mapEntry = i.next();
       LocalResourceStatus stat =
         recordFactory.newRecordInstance(LocalResourceStatus.class);
-      stat.setResource(rsrc);
-      Future<Path> fPath = pendingResources.get(rsrc);
+      stat.setResource(mapEntry.getKey());
+      Future<Path> fPath = mapEntry.getValue();
       if (fPath.isDone()) {
         try {
           Path localPath = fPath.get();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a18d5e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
index 07b3dea..a6aa337 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
@@ -130,7 +130,7 @@ public class ContainerMetrics implements MetricsSource {
   /**
    * Simple metrics cache to help prevent re-registrations.
    */
-  protected final static Map<ContainerId, ContainerMetrics>
+  private final static Map<ContainerId, ContainerMetrics>
       usageMetrics = new HashMap<>();
   // Create a timer to unregister container metrics,
   // whose associated thread run as a daemon.




[17/50] [abbrv] hadoop git commit: HADOOP-14730. Support protobuf FileStatus in AdlFileSystem.

Posted by wa...@apache.org.
HADOOP-14730. Support protobuf FileStatus in AdlFileSystem.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/55a181f8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/55a181f8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/55a181f8

Branch: refs/heads/YARN-5881
Commit: 55a181f845adcdcc9b008e9906ade1544fc220e4
Parents: 8d3fd81
Author: Chris Douglas <cd...@apache.org>
Authored: Mon Aug 7 21:31:28 2017 -0700
Committer: Chris Douglas <cd...@apache.org>
Committed: Mon Aug 7 21:31:28 2017 -0700

----------------------------------------------------------------------
 .../org/apache/hadoop/fs/adl/AdlFileStatus.java | 69 ++++++++++++++++++++
 .../org/apache/hadoop/fs/adl/AdlFileSystem.java | 27 ++------
 .../apache/hadoop/fs/adl/TestGetFileStatus.java | 57 ++++++++--------
 .../apache/hadoop/fs/adl/TestListStatus.java    |  8 ++-
 4 files changed, 105 insertions(+), 56 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/55a181f8/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileStatus.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileStatus.java b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileStatus.java
new file mode 100644
index 0000000..70c005d
--- /dev/null
+++ b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileStatus.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.fs.adl;
+
+import com.microsoft.azure.datalake.store.DirectoryEntry;
+import com.microsoft.azure.datalake.store.DirectoryEntryType;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.adl.AdlConfKeys.ADL_BLOCK_SIZE;
+import static org.apache.hadoop.fs.adl.AdlConfKeys.ADL_REPLICATION_FACTOR;
+
+/**
+ * Shim class supporting linking against 2.x clients.
+ */
+class AdlFileStatus extends FileStatus {
+
+  private static final long serialVersionUID = 0x01fcbe5e;
+
+  private boolean hasAcl = false;
+
+  AdlFileStatus(DirectoryEntry entry, Path path, boolean hasAcl) {
+    this(entry, path, entry.user, entry.group, hasAcl);
+  }
+
+  AdlFileStatus(DirectoryEntry entry, Path path,
+                String owner, String group, boolean hasAcl) {
+    super(entry.length, DirectoryEntryType.DIRECTORY == entry.type,
+        ADL_REPLICATION_FACTOR, ADL_BLOCK_SIZE,
+        entry.lastModifiedTime.getTime(), entry.lastAccessTime.getTime(),
+        new AdlPermission(hasAcl, Short.parseShort(entry.permission, 8)),
+        owner, group, null, path);
+    this.hasAcl = hasAcl;
+  }
+
+  @Override
+  public boolean hasAcl() {
+    return hasAcl;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    // satisfy findbugs
+    return super.equals(o);
+  }
+
+  @Override
+  public int hashCode() {
+    // satisfy findbugs
+    return super.hashCode();
+  }
+
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/55a181f8/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
index 0de538e..76ce43e 100644
--- a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
+++ b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
@@ -29,7 +29,6 @@ import com.google.common.annotations.VisibleForTesting;
 import com.microsoft.azure.datalake.store.ADLStoreClient;
 import com.microsoft.azure.datalake.store.ADLStoreOptions;
 import com.microsoft.azure.datalake.store.DirectoryEntry;
-import com.microsoft.azure.datalake.store.DirectoryEntryType;
 import com.microsoft.azure.datalake.store.IfExists;
 import com.microsoft.azure.datalake.store.LatencyTracker;
 import com.microsoft.azure.datalake.store.UserGroupRepresentation;
@@ -606,30 +605,12 @@ public class AdlFileSystem extends FileSystem {
   }
 
   private FileStatus toFileStatus(final DirectoryEntry entry, final Path f) {
-    boolean isDirectory = entry.type == DirectoryEntryType.DIRECTORY;
-    long lastModificationData = entry.lastModifiedTime.getTime();
-    long lastAccessTime = entry.lastAccessTime.getTime();
-    // set aclBit from ADLS backend response if
-    // ADL_SUPPORT_ACL_BIT_IN_FSPERMISSION is true.
-    final boolean aclBit = aclBitStatus ? entry.aclBit : false;
-
-    FsPermission permission = new AdlPermission(aclBit,
-        Short.valueOf(entry.permission, 8));
-    String user = entry.user;
-    String group = entry.group;
-
-    FileStatus status;
+    Path p = makeQualified(f);
+    boolean aclBit = aclBitStatus ? entry.aclBit : false;
     if (overrideOwner) {
-      status = new FileStatus(entry.length, isDirectory, ADL_REPLICATION_FACTOR,
-          ADL_BLOCK_SIZE, lastModificationData, lastAccessTime, permission,
-          userName, "hdfs", this.makeQualified(f));
-    } else {
-      status = new FileStatus(entry.length, isDirectory, ADL_REPLICATION_FACTOR,
-          ADL_BLOCK_SIZE, lastModificationData, lastAccessTime, permission,
-          user, group, this.makeQualified(f));
+      return new AdlFileStatus(entry, p, userName, "hdfs", aclBit);
     }
-
-    return status;
+    return new AdlFileStatus(entry, p, aclBit);
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/55a181f8/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java b/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java
index 0ea4b86..d9e22db 100644
--- a/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java
+++ b/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java
@@ -42,8 +42,8 @@ import static org.apache.hadoop.fs.adl.AdlConfKeys.ADL_BLOCK_SIZE;
  * org.apache.hadoop.fs.adl.live testing package.
  */
 public class TestGetFileStatus extends AdlMockWebServer {
-  private static final Logger LOG = LoggerFactory
-      .getLogger(TestGetFileStatus.class);
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestGetFileStatus.class);
 
   @Test
   public void getFileStatusReturnsAsExpected()
@@ -72,33 +72,30 @@ public class TestGetFileStatus extends AdlMockWebServer {
         fileStatus.isErasureCoded());
   }
 
-    @Test
-    public void getFileStatusAclBit()
-            throws URISyntaxException, IOException {
-        // With ACLBIT set to true
-        getMockServer().enqueue(new MockResponse().setResponseCode(200)
-                .setBody(TestADLResponseData.getGetFileStatusJSONResponse(true)));
-        long startTime = Time.monotonicNow();
-        FileStatus fileStatus = getMockAdlFileSystem()
-                .getFileStatus(new Path("/test1/test2"));
-        long endTime = Time.monotonicNow();
-        LOG.debug("Time : " + (endTime - startTime));
-        Assert.assertTrue(fileStatus.isFile());
-        Assert.assertEquals(true, fileStatus.getPermission().getAclBit());
-        Assert.assertEquals(fileStatus.hasAcl(),
-            fileStatus.getPermission().getAclBit());
+  @Test
+  public void getFileStatusAclBit() throws URISyntaxException, IOException {
+    // With ACLBIT set to true
+    getMockServer().enqueue(new MockResponse().setResponseCode(200)
+            .setBody(TestADLResponseData.getGetFileStatusJSONResponse(true)));
+    long startTime = Time.monotonicNow();
+    FileStatus fileStatus = getMockAdlFileSystem()
+            .getFileStatus(new Path("/test1/test2"));
+    long endTime = Time.monotonicNow();
+    LOG.debug("Time : " + (endTime - startTime));
+    Assert.assertTrue(fileStatus.isFile());
+    Assert.assertTrue(fileStatus.hasAcl());
+    Assert.assertTrue(fileStatus.getPermission().getAclBit());
 
-        // With ACLBIT set to false
-        getMockServer().enqueue(new MockResponse().setResponseCode(200)
-                .setBody(TestADLResponseData.getGetFileStatusJSONResponse(false)));
-        startTime = Time.monotonicNow();
-        fileStatus = getMockAdlFileSystem()
-                .getFileStatus(new Path("/test1/test2"));
-        endTime = Time.monotonicNow();
-        LOG.debug("Time : " + (endTime - startTime));
-        Assert.assertTrue(fileStatus.isFile());
-        Assert.assertEquals(false, fileStatus.getPermission().getAclBit());
-        Assert.assertEquals(fileStatus.hasAcl(),
-            fileStatus.getPermission().getAclBit());
-    }
+    // With ACLBIT set to false
+    getMockServer().enqueue(new MockResponse().setResponseCode(200)
+            .setBody(TestADLResponseData.getGetFileStatusJSONResponse(false)));
+    startTime = Time.monotonicNow();
+    fileStatus = getMockAdlFileSystem()
+            .getFileStatus(new Path("/test1/test2"));
+    endTime = Time.monotonicNow();
+    LOG.debug("Time : " + (endTime - startTime));
+    Assert.assertTrue(fileStatus.isFile());
+    Assert.assertFalse(fileStatus.hasAcl());
+    Assert.assertFalse(fileStatus.getPermission().getAclBit());
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/55a181f8/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestListStatus.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestListStatus.java b/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestListStatus.java
index dac8886..db32476 100644
--- a/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestListStatus.java
+++ b/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestListStatus.java
@@ -102,7 +102,7 @@ public class TestListStatus extends AdlMockWebServer {
   }
 
   @Test
-  public void listStatusAclBit()
+  public void listStatusAcl()
           throws URISyntaxException, IOException {
     // With ACLBIT set to true
     getMockServer().enqueue(new MockResponse().setResponseCode(200)
@@ -115,7 +115,8 @@ public class TestListStatus extends AdlMockWebServer {
     LOG.debug("Time : " + (endTime - startTime));
     for (int i = 0; i < ls.length; i++) {
       Assert.assertTrue(ls[i].isDirectory());
-      Assert.assertEquals(true, ls[i].getPermission().getAclBit());
+      Assert.assertTrue(ls[i].hasAcl());
+      Assert.assertTrue(ls[i].getPermission().getAclBit());
     }
 
     // With ACLBIT set to false
@@ -129,7 +130,8 @@ public class TestListStatus extends AdlMockWebServer {
     LOG.debug("Time : " + (endTime - startTime));
     for (int i = 0; i < ls.length; i++) {
       Assert.assertTrue(ls[i].isDirectory());
-      Assert.assertEquals(false, ls[i].getPermission().getAclBit());
+      Assert.assertFalse(ls[i].hasAcl());
+      Assert.assertFalse(ls[i].getPermission().getAclBit());
     }
   }
 }

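With the AdlFileStatus shim, hasAcl() is carried by the status itself rather than recovered from the permission bits alone. A minimal client-side sketch of the invariant the tests above assert; the URI and path are placeholders:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HasAclSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder URI; any FileSystem that reports ACL state works here.
        FileSystem fs = FileSystem.get(new URI("adl://account.azuredatalakestore.net"), conf);
        FileStatus st = fs.getFileStatus(new Path("/test1/test2"));
        // The status-level flag and the legacy aclBit in the permission
        // are expected to agree, which is what the tests above assert.
        System.out.println(st.hasAcl() == st.getPermission().getAclBit());
      }
    }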



[32/50] [abbrv] hadoop git commit: HDFS-12117. HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface. Contributed by Wellington Chevreuil.

Posted by wa...@apache.org.
HDFS-12117. HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface. Contributed by Wellington Chevreuil.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8a4bff02
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8a4bff02
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8a4bff02

Branch: refs/heads/YARN-5881
Commit: 8a4bff02c1534c6bf529726f2bbe414ac4c172e8
Parents: 9a3c237
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Tue Aug 8 23:58:53 2017 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Tue Aug 8 23:58:53 2017 -0700

----------------------------------------------------------------------
 .../hadoop/fs/http/client/HttpFSFileSystem.java |  47 ++++++-
 .../hadoop/fs/http/server/FSOperations.java     | 105 ++++++++++++++
 .../http/server/HttpFSParametersProvider.java   |  45 ++++++
 .../hadoop/fs/http/server/HttpFSServer.java     |  36 +++++
 .../fs/http/client/BaseTestHttpFSWith.java      | 110 ++++++++++++++-
 .../hadoop/fs/http/server/TestHttpFSServer.java | 140 ++++++++++++++++++-
 6 files changed, 479 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a4bff02/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
index d139100..1059a02 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
@@ -124,6 +124,8 @@ public class HttpFSFileSystem extends FileSystem
   public static final String POLICY_NAME_PARAM = "storagepolicy";
   public static final String OFFSET_PARAM = "offset";
   public static final String LENGTH_PARAM = "length";
+  public static final String SNAPSHOT_NAME_PARAM = "snapshotname";
+  public static final String OLD_SNAPSHOT_NAME_PARAM = "oldsnapshotname";
 
   public static final Short DEFAULT_PERMISSION = 0755;
   public static final String ACLSPEC_DEFAULT = "";
@@ -144,6 +146,8 @@ public class HttpFSFileSystem extends FileSystem
 
   public static final String UPLOAD_CONTENT_TYPE= "application/octet-stream";
 
+  public static final String SNAPSHOT_JSON = "Path";
+
   public enum FILE_TYPE {
     FILE, DIRECTORY, SYMLINK;
 
@@ -229,7 +233,9 @@ public class HttpFSFileSystem extends FileSystem
     DELETE(HTTP_DELETE), SETXATTR(HTTP_PUT), GETXATTRS(HTTP_GET),
     REMOVEXATTR(HTTP_PUT), LISTXATTRS(HTTP_GET), LISTSTATUS_BATCH(HTTP_GET),
     GETALLSTORAGEPOLICY(HTTP_GET), GETSTORAGEPOLICY(HTTP_GET),
-    SETSTORAGEPOLICY(HTTP_PUT), UNSETSTORAGEPOLICY(HTTP_POST);
+    SETSTORAGEPOLICY(HTTP_PUT), UNSETSTORAGEPOLICY(HTTP_POST),
+    CREATESNAPSHOT(HTTP_PUT), DELETESNAPSHOT(HTTP_DELETE),
+    RENAMESNAPSHOT(HTTP_PUT);
 
     private String httpMethod;
 
@@ -1434,4 +1440,43 @@ public class HttpFSFileSystem extends FileSystem
         Operation.UNSETSTORAGEPOLICY.getMethod(), params, src, true);
     HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
   }
+
+  @Override
+  public final Path createSnapshot(Path path, String snapshotName)
+      throws IOException {
+    Map<String, String> params = new HashMap<String, String>();
+    params.put(OP_PARAM, Operation.CREATESNAPSHOT.toString());
+    if (snapshotName != null) {
+      params.put(SNAPSHOT_NAME_PARAM, snapshotName);
+    }
+    HttpURLConnection conn = getConnection(Operation.CREATESNAPSHOT.getMethod(),
+        params, path, true);
+    HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
+    JSONObject json = (JSONObject) HttpFSUtils.jsonParse(conn);
+    return new Path((String) json.get(SNAPSHOT_JSON));
+  }
+
+  @Override
+  public void renameSnapshot(Path path, String snapshotOldName,
+                             String snapshotNewName) throws IOException {
+    Map<String, String> params = new HashMap<String, String>();
+    params.put(OP_PARAM, Operation.RENAMESNAPSHOT.toString());
+    params.put(SNAPSHOT_NAME_PARAM, snapshotNewName);
+    params.put(OLD_SNAPSHOT_NAME_PARAM, snapshotOldName);
+    HttpURLConnection conn = getConnection(Operation.RENAMESNAPSHOT.getMethod(),
+        params, path, true);
+    HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
+  }
+
+  @Override
+  public void deleteSnapshot(Path path, String snapshotName)
+      throws IOException {
+    Map<String, String> params = new HashMap<String, String>();
+    params.put(OP_PARAM, Operation.DELETESNAPSHOT.toString());
+    params.put(SNAPSHOT_NAME_PARAM, snapshotName);
+    HttpURLConnection conn = getConnection(Operation.DELETESNAPSHOT.getMethod(),
+        params, path, true);
+    HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
+  }
+
 }

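On the client side, the three new operations ride the standard FileSystem snapshot API. A minimal usage sketch against an HttpFS endpoint; the host, port, and pre-existing snapshottable directory are assumptions:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HttpFSSnapshotSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder endpoint; HttpFS serves the webhdfs REST API on its own port.
        FileSystem fs = FileSystem.get(new URI("webhdfs://httpfs-host:14000"), conf);
        Path dir = new Path("/tmp/tmp-snap-test");            // must already be snapshottable
        Path snap = fs.createSnapshot(dir, "snap-with-name"); // PUT  op=CREATESNAPSHOT
        fs.renameSnapshot(dir, "snap-with-name", "snap-new"); // PUT  op=RENAMESNAPSHOT
        fs.deleteSnapshot(dir, "snap-new");                   // DELETE op=DELETESNAPSHOT
        System.out.println("snapshot was created at " + snap);
      }
    }
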
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a4bff02/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index f1615c3..c008802 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -1492,4 +1492,109 @@ public class FSOperations {
       return JsonUtil.toJsonMap(locations);
     }
   }
+
+  /**
+   *  Executor that performs a createSnapshot FileSystemAccess operation.
+   */
+  @InterfaceAudience.Private
+  public static class FSCreateSnapshot implements
+      FileSystemAccess.FileSystemExecutor<String> {
+
+    private Path path;
+    private String snapshotName;
+
+    /**
+     * Creates a createSnapshot executor.
+     * @param path directory path to be snapshotted.
+     * @param snapshotName the snapshot name.
+     */
+    public FSCreateSnapshot(String path, String snapshotName) {
+      this.path = new Path(path);
+      this.snapshotName = snapshotName;
+    }
+
+    /**
+     * Executes the filesystem operation.
+     * @param fs filesystem instance to use.
+     * @return <code>Path</code> the complete path for newly created snapshot
+     * @throws IOException thrown if an IO error occurred.
+     */
+    @Override
+    public String execute(FileSystem fs) throws IOException {
+      Path snapshotPath = fs.createSnapshot(path, snapshotName);
+      JSONObject json = toJSON(HttpFSFileSystem.HOME_DIR_JSON,
+          snapshotPath.toString());
+      return json.toJSONString().replaceAll("\\\\", "");
+    }
+  }
+
+  /**
+   *  Executor that performs a deleteSnapshot FileSystemAccess operation.
+   */
+  @InterfaceAudience.Private
+  public static class FSDeleteSnapshot implements
+      FileSystemAccess.FileSystemExecutor<Void> {
+
+    private Path path;
+    private String snapshotName;
+
+    /**
+     * Creates a deleteSnapshot executor.
+     * @param path path for the snapshot to be deleted.
+     * @param snapshotName snapshot name.
+     */
+    public FSDeleteSnapshot(String path, String snapshotName) {
+      this.path = new Path(path);
+      this.snapshotName = snapshotName;
+    }
+
+    /**
+     * Executes the filesystem operation.
+     * @param fs filesystem instance to use.
+     * @return void
+     * @throws IOException thrown if an IO error occurred.
+     */
+    @Override
+    public Void execute(FileSystem fs) throws IOException {
+      fs.deleteSnapshot(path, snapshotName);
+      return null;
+    }
+  }
+
+  /**
+   *  Executor that performs a renameSnapshot FileSystemAccess operation.
+   */
+  @InterfaceAudience.Private
+  public static class FSRenameSnapshot implements
+      FileSystemAccess.FileSystemExecutor<Void> {
+    private Path path;
+    private String oldSnapshotName;
+    private String snapshotName;
+
+    /**
+     * Creates a renameSnapshot executor.
+     * @param path directory path of the snapshot to be renamed.
+     * @param oldSnapshotName current snapshot name.
+     * @param snapshotName new snapshot name to be set.
+     */
+    public FSRenameSnapshot(String path, String oldSnapshotName,
+                            String snapshotName) {
+      this.path = new Path(path);
+      this.oldSnapshotName = oldSnapshotName;
+      this.snapshotName = snapshotName;
+    }
+
+    /**
+     * Executes the filesystem operation.
+     * @param fs filesystem instance to use.
+     * @return void
+     * @throws IOException thrown if an IO error occurred.
+     */
+    @Override
+    public Void execute(FileSystem fs) throws IOException {
+      fs.renameSnapshot(path, oldSnapshotName, snapshotName);
+      return null;
+    }
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a4bff02/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
index 347a747..5f265c0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
@@ -100,6 +100,13 @@ public class HttpFSParametersProvider extends ParametersProvider {
     PARAMS_DEF.put(Operation.SETSTORAGEPOLICY,
         new Class[] {PolicyNameParam.class});
     PARAMS_DEF.put(Operation.UNSETSTORAGEPOLICY, new Class[] {});
+    PARAMS_DEF.put(Operation.CREATESNAPSHOT,
+            new Class[] {SnapshotNameParam.class});
+    PARAMS_DEF.put(Operation.DELETESNAPSHOT,
+            new Class[] {SnapshotNameParam.class});
+    PARAMS_DEF.put(Operation.RENAMESNAPSHOT,
+            new Class[] {OldSnapshotNameParam.class,
+                SnapshotNameParam.class});
   }
 
   public HttpFSParametersProvider() {
@@ -565,4 +572,42 @@ public class HttpFSParametersProvider extends ParametersProvider {
       super(NAME, null);
     }
   }
+
+  /**
+   * Class for SnapshotName parameter.
+   */
+  public static class SnapshotNameParam extends StringParam {
+
+    /**
+     * Parameter name.
+     */
+    public static final String NAME = HttpFSFileSystem.SNAPSHOT_NAME_PARAM;
+
+    /**
+     * Constructor.
+     */
+    public SnapshotNameParam() {
+      super(NAME, null);
+    }
+
+  }
+
+  /**
+   * Class for OldSnapshotName parameter.
+   */
+  public static class OldSnapshotNameParam extends StringParam {
+
+    /**
+     * Parameter name.
+     */
+    public static final String NAME = HttpFSFileSystem.OLD_SNAPSHOT_NAME_PARAM;
+
+    /**
+     * Constructor.
+     */
+    public OldSnapshotNameParam() {
+      super(NAME, null);
+    }
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a4bff02/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
index 5c0c9b5..03ccb4c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.LenParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.ModifiedTimeParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.NewLengthParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.OffsetParam;
+import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.OldSnapshotNameParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.OperationParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.OverwriteParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.OwnerParam;
@@ -45,6 +46,7 @@ import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.PolicyNameParam
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.RecursiveParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.ReplicationParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.SourcesParam;
+import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.SnapshotNameParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.XAttrEncodingParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.XAttrNameParam;
 import org.apache.hadoop.fs.http.server.HttpFSParametersProvider.XAttrSetFlagParam;
@@ -430,6 +432,16 @@ public class HttpFSServer {
         response = Response.ok(json).type(MediaType.APPLICATION_JSON).build();
         break;
       }
+      case DELETESNAPSHOT: {
+        String snapshotName = params.get(SnapshotNameParam.NAME,
+            SnapshotNameParam.class);
+        FSOperations.FSDeleteSnapshot command =
+                new FSOperations.FSDeleteSnapshot(path, snapshotName);
+        fsExecute(user, command);
+        AUDIT_LOG.info("[{}] deleted snapshot [{}]", path, snapshotName);
+        response = Response.ok().build();
+        break;
+      }
       default: {
         throw new IOException(
           MessageFormat.format("Invalid HTTP DELETE operation [{0}]",
@@ -602,6 +614,16 @@ public class HttpFSServer {
         }
         break;
       }
+      case CREATESNAPSHOT: {
+        String snapshotName = params.get(SnapshotNameParam.NAME,
+            SnapshotNameParam.class);
+        FSOperations.FSCreateSnapshot command =
+            new FSOperations.FSCreateSnapshot(path, snapshotName);
+        String json = fsExecute(user, command);
+        AUDIT_LOG.info("[{}] snapshot created as [{}]", path, snapshotName);
+        response = Response.ok(json).type(MediaType.APPLICATION_JSON).build();
+        break;
+      }
       case SETXATTR: {
         String xattrName = params.get(XAttrNameParam.NAME, 
             XAttrNameParam.class);
@@ -617,6 +639,20 @@ public class HttpFSServer {
         response = Response.ok().build();
         break;
       }
+      case RENAMESNAPSHOT: {
+        String oldSnapshotName = params.get(OldSnapshotNameParam.NAME,
+            OldSnapshotNameParam.class);
+        String snapshotName = params.get(SnapshotNameParam.NAME,
+            SnapshotNameParam.class);
+        FSOperations.FSRenameSnapshot command =
+                new FSOperations.FSRenameSnapshot(path, oldSnapshotName,
+                    snapshotName);
+        fsExecute(user, command);
+        AUDIT_LOG.info("[{}] renamed snapshot [{}] to [{}]", path,
+            oldSnapshotName, snapshotName);
+        response = Response.ok().build();
+        break;
+      }
       case REMOVEXATTR: {
         String xattrName = params.get(XAttrNameParam.NAME, XAttrNameParam.class);
         FSOperations.FSRemoveXAttr command = new FSOperations.FSRemoveXAttr(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a4bff02/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
index ca11c66..553bbce 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.AppendTestUtil;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -74,6 +75,7 @@ import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
+import java.util.regex.Pattern;
 
 import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
@@ -1034,11 +1036,12 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase {
   }
 
   protected enum Operation {
-    GET, OPEN, CREATE, APPEND, TRUNCATE, CONCAT, RENAME, DELETE, LIST_STATUS, 
+    GET, OPEN, CREATE, APPEND, TRUNCATE, CONCAT, RENAME, DELETE, LIST_STATUS,
     WORKING_DIRECTORY, MKDIRS, SET_TIMES, SET_PERMISSION, SET_OWNER,
     SET_REPLICATION, CHECKSUM, CONTENT_SUMMARY, FILEACLS, DIRACLS, SET_XATTR,
     GET_XATTRS, REMOVE_XATTR, LIST_XATTRS, ENCRYPTION, LIST_STATUS_BATCH,
-    GETTRASHROOT, STORAGEPOLICY, ERASURE_CODING, GETFILEBLOCKLOCATIONS
+    GETTRASHROOT, STORAGEPOLICY, ERASURE_CODING, GETFILEBLOCKLOCATIONS,
+    CREATE_SNAPSHOT, RENAME_SNAPSHOT, DELETE_SNAPSHOT
   }
 
   private void operation(Operation op) throws Exception {
@@ -1130,6 +1133,15 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase {
     case GETFILEBLOCKLOCATIONS:
       testGetFileBlockLocations();
       break;
+    case CREATE_SNAPSHOT:
+      testCreateSnapshot();
+      break;
+    case RENAME_SNAPSHOT:
+      testRenameSnapshot();
+      break;
+    case DELETE_SNAPSHOT:
+      testDeleteSnapshot();
+      break;
     }
   }
 
@@ -1257,4 +1269,98 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase {
           location2.getTopologyPaths());
     }
   }
+
+  private void testCreateSnapshot(String snapshotName) throws Exception {
+    if (!this.isLocalFS()) {
+      Path snapshottablePath = new Path("/tmp/tmp-snap-test");
+      createSnapshotTestsPreconditions(snapshottablePath);
+      //Now get the FileSystem instance that's being tested
+      FileSystem fs = this.getHttpFSFileSystem();
+      if (snapshotName == null) {
+        fs.createSnapshot(snapshottablePath);
+      } else {
+        fs.createSnapshot(snapshottablePath, snapshotName);
+      }
+      Path snapshotsDir = new Path("/tmp/tmp-snap-test/.snapshot");
+      FileStatus[] snapshotItems = fs.listStatus(snapshotsDir);
+      assertTrue("Should have exactly one snapshot.",
+          snapshotItems.length == 1);
+      String resultingSnapName = snapshotItems[0].getPath().getName();
+      if (snapshotName == null) {
+        assertTrue("Snapshot auto generated name not matching pattern",
+            Pattern.matches("(s)(\\d{8})(-)(\\d{6})(\\.)(\\d{3})",
+                resultingSnapName));
+      } else {
+        assertTrue("Snapshot name is not same as passed name.",
+            snapshotName.equals(resultingSnapName));
+      }
+      cleanSnapshotTests(snapshottablePath, resultingSnapName);
+    }
+  }
+
+  private void testCreateSnapshot() throws Exception {
+    testCreateSnapshot(null);
+    testCreateSnapshot("snap-with-name");
+  }
+
+  private void createSnapshotTestsPreconditions(Path snapshottablePath)
+      throws Exception {
+    //Needed to get a DistributedFileSystem instance, in order to
+    //call allowSnapshot on the newly created directory
+    DistributedFileSystem distributedFs = (DistributedFileSystem)
+        FileSystem.get(snapshottablePath.toUri(), this.getProxiedFSConf());
+    distributedFs.mkdirs(snapshottablePath);
+    distributedFs.allowSnapshot(snapshottablePath);
+    Path subdirPath = new Path("/tmp/tmp-snap-test/subdir");
+    distributedFs.mkdirs(subdirPath);
+
+  }
+
+  private void cleanSnapshotTests(Path snapshottablePath,
+                                  String resultingSnapName) throws Exception {
+    DistributedFileSystem distributedFs = (DistributedFileSystem)
+        FileSystem.get(snapshottablePath.toUri(), this.getProxiedFSConf());
+    distributedFs.deleteSnapshot(snapshottablePath, resultingSnapName);
+    distributedFs.delete(snapshottablePath, true);
+  }
+
+  private void testRenameSnapshot() throws Exception {
+    if (!this.isLocalFS()) {
+      Path snapshottablePath = new Path("/tmp/tmp-snap-test");
+      createSnapshotTestsPreconditions(snapshottablePath);
+      //Now get the FileSystem instance that's being tested
+      FileSystem fs = this.getHttpFSFileSystem();
+      fs.createSnapshot(snapshottablePath, "snap-to-rename");
+      fs.renameSnapshot(snapshottablePath, "snap-to-rename",
+          "snap-new-name");
+      Path snapshotsDir = new Path("/tmp/tmp-snap-test/.snapshot");
+      FileStatus[] snapshotItems = fs.listStatus(snapshotsDir);
+      assertTrue("Should have exactly one snapshot.",
+          snapshotItems.length == 1);
+      String resultingSnapName = snapshotItems[0].getPath().getName();
+      assertTrue("Snapshot name is not same as passed name.",
+          "snap-new-name".equals(resultingSnapName));
+      cleanSnapshotTests(snapshottablePath, resultingSnapName);
+    }
+  }
+
+  private void testDeleteSnapshot() throws Exception {
+    if (!this.isLocalFS()) {
+      Path snapshottablePath = new Path("/tmp/tmp-snap-test");
+      createSnapshotTestsPreconditions(snapshottablePath);
+      //Now get the FileSystem instance that's being tested
+      FileSystem fs = this.getHttpFSFileSystem();
+      fs.createSnapshot(snapshottablePath, "snap-to-delete");
+      Path snapshotsDir = new Path("/tmp/tmp-snap-test/.snapshot");
+      FileStatus[] snapshotItems = fs.listStatus(snapshotsDir);
+      assertTrue("Should have exactly one snapshot.",
+          snapshotItems.length == 1);
+      fs.deleteSnapshot(snapshottablePath, "snap-to-delete");
+      snapshotItems = fs.listStatus(snapshotsDir);
+      assertTrue("There should be no snapshot anymore.",
+          snapshotItems.length == 0);
+      fs.delete(snapshottablePath, true);
+    }
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a4bff02/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
index 0e1cc20..60e70d2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.fs.http.server;
 
 import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.security.authentication.util.SignerSecretProvider;
 import org.apache.hadoop.security.authentication.util.StringSignerSecretProviderCreator;
 import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
@@ -71,6 +72,7 @@ import org.eclipse.jetty.webapp.WebAppContext;
 
 import com.google.common.collect.Maps;
 import java.util.Properties;
+import java.util.regex.Pattern;
 import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 
 /**
@@ -465,6 +467,20 @@ public class TestHttpFSServer extends HFSTestCase {
    */
   private void putCmd(String filename, String command,
                       String params) throws Exception {
+    Assert.assertEquals(HttpURLConnection.HTTP_OK,
+            putCmdWithReturn(filename, command, params).getResponseCode());
+  }
+
+  /**
+   * General-purpose HTTP PUT command to the httpfs server,
+   * which returns the related HttpURLConnection instance.
+   * @param filename The file to operate upon
+   * @param command The command to perform (SETACL, etc)
+   * @param params Parameters, like "aclspec=..."
+   * @return HttpURLConnection the HttpURLConnection instance for the given PUT
+   */
+  private HttpURLConnection putCmdWithReturn(String filename, String command,
+                      String params) throws Exception {
     String user = HadoopUsersConfTestHelper.getHadoopUsers()[0];
     // Remove leading / from filename
     if (filename.charAt(0) == '/') {
@@ -478,7 +494,7 @@ public class TestHttpFSServer extends HFSTestCase {
     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
     conn.setRequestMethod("PUT");
     conn.connect();
-    Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+    return conn;
   }
 
   /**
@@ -882,6 +898,108 @@ public class TestHttpFSServer extends HFSTestCase {
     delegationTokenCommonTests(false);
   }
 
+  private HttpURLConnection snapshotTestPreconditions(String httpMethod,
+                                                      String snapOperation,
+                                                      String additionalParams)
+      throws Exception {
+    String user = HadoopUsersConfTestHelper.getHadoopUsers()[0];
+    URL url = new URL(TestJettyHelper.getJettyURL(), MessageFormat.format(
+        "/webhdfs/v1/tmp/tmp-snap-test/subdir?user.name={0}&op=MKDIRS",
+        user));
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    conn.setRequestMethod("PUT");
+    conn.connect();
+
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+
+    //needed to make the given dir snapshottable
+    Path snapshottablePath = new Path("/tmp/tmp-snap-test");
+    DistributedFileSystem dfs =
+        (DistributedFileSystem) FileSystem.get(snapshottablePath.toUri(),
+        TestHdfsHelper.getHdfsConf());
+    dfs.allowSnapshot(snapshottablePath);
+
+    //Try to create snapshot passing snapshot name
+    url = new URL(TestJettyHelper.getJettyURL(), MessageFormat.format(
+        "/webhdfs/v1/tmp/tmp-snap-test?user.name={0}&op={1}&{2}", user,
+        snapOperation, additionalParams));
+    conn = (HttpURLConnection) url.openConnection();
+    conn.setRequestMethod(httpMethod);
+    conn.connect();
+    return conn;
+  }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testCreateSnapshot() throws Exception {
+    createHttpFSServer(false, false);
+    final HttpURLConnection conn = snapshotTestPreconditions("PUT",
+        "CREATESNAPSHOT",
+        "snapshotname=snap-with-name");
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+    final BufferedReader reader =
+        new BufferedReader(new InputStreamReader(conn.getInputStream()));
+    String result = reader.readLine();
+    //Validates if the content format is correct
+    Assert.assertTrue(result.
+        equals("{\"Path\":\"/tmp/tmp-snap-test/.snapshot/snap-with-name\"}"));
+    //Validates if the snapshot is properly created under .snapshot folder
+    result = getStatus("/tmp/tmp-snap-test/.snapshot",
+        "LISTSTATUS");
+    Assert.assertTrue(result.contains("snap-with-name"));
+  }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testCreateSnapshotNoSnapshotName() throws Exception {
+    createHttpFSServer(false, false);
+    final HttpURLConnection conn = snapshotTestPreconditions("PUT",
+        "CREATESNAPSHOT",
+        "");
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+    final BufferedReader reader = new BufferedReader(
+        new InputStreamReader(conn.getInputStream()));
+    String result = reader.readLine();
+    //Validates if the content format is correct
+    Assert.assertTrue(Pattern.matches(
+        "(\\{\\\"Path\\\"\\:\\\"/tmp/tmp-snap-test/.snapshot/s)" +
+            "(\\d{8})(-)(\\d{6})(\\.)(\\d{3})(\\\"\\})", result));
+    //Validates if the snapshot is properly created under .snapshot folder
+    result = getStatus("/tmp/tmp-snap-test/.snapshot",
+        "LISTSTATUS");
+
+    Assert.assertTrue(Pattern.matches("(.+)(\\\"pathSuffix\\\":\\\"s)" +
+            "(\\d{8})(-)(\\d{6})(\\.)(\\d{3})(\\\")(.+)",
+        result));
+  }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testRenameSnapshot() throws Exception {
+    createHttpFSServer(false, false);
+    HttpURLConnection conn = snapshotTestPreconditions("PUT",
+        "CREATESNAPSHOT",
+        "snapshotname=snap-to-rename");
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+    conn = snapshotTestPreconditions("PUT",
+        "RENAMESNAPSHOT",
+        "oldsnapshotname=snap-to-rename" +
+            "&snapshotname=snap-renamed");
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+    //Validates the snapshot is properly renamed under .snapshot folder
+    String result = getStatus("/tmp/tmp-snap-test/.snapshot",
+        "LISTSTATUS");
+    Assert.assertTrue(result.contains("snap-renamed"));
+    //There should be no snapshot named snap-to-rename now
+    Assert.assertFalse(result.contains("snap-to-rename"));
+  }
+
   @Test
   @TestDir
   @TestJetty
@@ -890,4 +1008,24 @@ public class TestHttpFSServer extends HFSTestCase {
     createHttpFSServer(true, true);
     delegationTokenCommonTests(true);
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testDeleteSnapshot() throws Exception {
+    createHttpFSServer(false, false);
+    HttpURLConnection conn = snapshotTestPreconditions("PUT",
+        "CREATESNAPSHOT",
+        "snapshotname=snap-to-delete");
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+    conn = snapshotTestPreconditions("DELETE",
+        "DELETESNAPSHOT",
+        "snapshotname=snap-to-delete");
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+    //Validates the snapshot is not under .snapshot folder anymore
+    String result = getStatus("/tmp/tmp-snap-test/.snapshot",
+        "LISTSTATUS");
+    Assert.assertFalse(result.contains("snap-to-delete"));
+  }
 }




[11/50] [abbrv] hadoop git commit: HDFS-12306. Add audit log for some erasure coding operations. Contributed by Huafeng Wang

Posted by wa...@apache.org.
HDFS-12306. Add audit log for some erasure coding operations. Contributed by Huafeng Wang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0b674360
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0b674360
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0b674360

Branch: refs/heads/YARN-5881
Commit: 0b67436068899497e99c86f37fd4887ca188fae2
Parents: b0fbf17
Author: Kai Zheng <ka...@intel.com>
Authored: Mon Aug 7 19:30:10 2017 +0800
Committer: Kai Zheng <ka...@intel.com>
Committed: Mon Aug 7 19:30:10 2017 +0800

----------------------------------------------------------------------
 .../hdfs/server/namenode/FSNamesystem.java      | 48 ++++++++++++--------
 .../hdfs/server/namenode/NameNodeRpcServer.java |  2 +-
 2 files changed, 29 insertions(+), 21 deletions(-)
----------------------------------------------------------------------

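The refactoring below applies one pattern throughout FSNamesystem: logAuditEvent moves into the finally block, so success and failure paths are audited exactly once instead of being logged separately in a catch clause and again after the try. A minimal sketch of the shape, with illustrative names:

    import java.io.IOException;

    public class AuditInFinallySketch {
      interface AuditLog {
        void log(boolean success, String op);
      }

      static Object getPolicy(AuditLog audit) throws IOException {
        final String operationName = "getErasureCodingPolicy";
        boolean success = false;
        try {
          Object ret = lookupPolicy(); // may throw
          success = true;
          return ret;
        } finally {
          // Runs on both return and throw, so the audit record is emitted
          // exactly once with the correct outcome.
          audit.log(success, operationName);
        }
      }

      static Object lookupPolicy() throws IOException {
        return new Object(); // stand-in for the FSDirErasureCodingOp call
      }
    }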

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b674360/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 229de05..b1639b2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -7055,18 +7055,13 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
       resultingStat = FSDirErasureCodingOp.setErasureCodingPolicy(this,
           srcArg, ecPolicyName, pc, logRetryCache);
       success = true;
-    } catch (AccessControlException ace) {
-      logAuditEvent(success, operationName, srcArg, null,
-          resultingStat);
-      throw ace;
     } finally {
       writeUnlock(operationName);
       if (success) {
         getEditLog().logSync();
       }
+      logAuditEvent(success, operationName, srcArg, null, resultingStat);
     }
-    logAuditEvent(success, operationName, srcArg, null,
-        resultingStat);
   }
 
   /**
@@ -7074,9 +7069,9 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
    * @param policies The policies to add.
   * @return The result of each add operation.
    */
-  AddECPolicyResponse[] addECPolicies(ErasureCodingPolicy[] policies)
+  AddECPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies)
       throws IOException {
-    final String operationName = "addECPolicies";
+    final String operationName = "addErasureCodingPolicies";
     String addECPolicyName = "";
     checkOperation(OperationCategory.WRITE);
     List<AddECPolicyResponse> responses = new ArrayList<>();
@@ -7201,18 +7196,13 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
       resultingStat = FSDirErasureCodingOp.unsetErasureCodingPolicy(this,
           srcArg, pc, logRetryCache);
       success = true;
-    } catch (AccessControlException ace) {
-      logAuditEvent(success, operationName, srcArg, null,
-          resultingStat);
-      throw ace;
     } finally {
       writeUnlock(operationName);
       if (success) {
         getEditLog().logSync();
       }
+      logAuditEvent(success, operationName, srcArg, null, resultingStat);
     }
-    logAuditEvent(success, operationName, srcArg, null,
-        resultingStat);
   }
 
   /**
@@ -7220,14 +7210,20 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
    */
   ErasureCodingPolicy getErasureCodingPolicy(String src)
       throws AccessControlException, UnresolvedLinkException, IOException {
+    final String operationName = "getErasureCodingPolicy";
+    boolean success = false;
     checkOperation(OperationCategory.READ);
     FSPermissionChecker pc = getPermissionChecker();
     readLock();
     try {
       checkOperation(OperationCategory.READ);
-      return FSDirErasureCodingOp.getErasureCodingPolicy(this, src, pc);
+      final ErasureCodingPolicy ret =
+          FSDirErasureCodingOp.getErasureCodingPolicy(this, src, pc);
+      success = true;
+      return ret;
     } finally {
-      readUnlock("getErasureCodingPolicy");
+      readUnlock(operationName);
+      logAuditEvent(success, operationName, null);
     }
   }
 
@@ -7235,13 +7231,19 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
   * Get available erasure coding policies
    */
   ErasureCodingPolicy[] getErasureCodingPolicies() throws IOException {
+    final String operationName = "getErasureCodingPolicies";
+    boolean success = false;
     checkOperation(OperationCategory.READ);
     readLock();
     try {
       checkOperation(OperationCategory.READ);
-      return FSDirErasureCodingOp.getErasureCodingPolicies(this);
+      final ErasureCodingPolicy[] ret =
+          FSDirErasureCodingOp.getErasureCodingPolicies(this);
+      success = true;
+      return ret;
     } finally {
-      readUnlock("getErasureCodingPolicies");
+      readUnlock(operationName);
+      logAuditEvent(success, operationName, null);
     }
   }
 
@@ -7249,13 +7251,19 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
    * Get available erasure coding codecs and corresponding coders.
    */
   HashMap<String, String> getErasureCodingCodecs() throws IOException {
+    final String operationName = "getErasureCodingCodecs";
+    boolean success = false;
     checkOperation(OperationCategory.READ);
     readLock();
     try {
       checkOperation(OperationCategory.READ);
-      return FSDirErasureCodingOp.getErasureCodingCodecs(this);
+      final HashMap<String, String> ret =
+          FSDirErasureCodingOp.getErasureCodingCodecs(this);
+      success = true;
+      return ret;
     } finally {
-      readUnlock("getErasureCodingCodecs");
+      readUnlock(operationName);
+      logAuditEvent(success, operationName, null);
     }
   }
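
The refactoring above drops the dedicated AccessControlException catch in favor of a success flag that is audited once in the finally block. A self-contained sketch of the pattern, with a plain ReentrantReadWriteLock and a stub audit logger standing in for the FSNamesystem internals:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class AuditPatternSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  String getErasureCodingPolicy(String src) {
    final String operationName = "getErasureCodingPolicy";
    boolean success = false;
    lock.readLock().lock();
    try {
      String ret = lookupPolicy(src); // stands in for FSDirErasureCodingOp
      success = true;                 // reached only if no exception was thrown
      return ret;
    } finally {
      lock.readLock().unlock();
      // A single audit call covers success, AccessControlException, and any
      // other failure, instead of duplicating the logging on separate paths.
      logAuditEvent(success, operationName);
    }
  }

  private String lookupPolicy(String src) { return "RS-6-3-64k"; }

  private void logAuditEvent(boolean success, String op) {
    System.out.println("audit: op=" + op + " success=" + success);
  }
}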
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b674360/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index 52b422c..9265381 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
@@ -2298,7 +2298,7 @@ public class NameNodeRpcServer implements NamenodeProtocols {
       ErasureCodingPolicy[] policies) throws IOException {
     checkNNStartup();
     namesystem.checkSuperuserPrivilege();
-    return namesystem.addECPolicies(policies);
+    return namesystem.addErasureCodingPolicies(policies);
   }
 
   @Override




[28/50] [abbrv] hadoop git commit: HDFS-11975. Provide a system-default EC policy. Contributed by Huichun Lu

Posted by wa...@apache.org.
HDFS-11975. Provide a system-default EC policy. Contributed by Huichun Lu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a53b8b6f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a53b8b6f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a53b8b6f

Branch: refs/heads/YARN-5881
Commit: a53b8b6fdce111b1e35ad0dc563eb53d1c58462f
Parents: ad2a350
Author: Kai Zheng <ka...@intel.com>
Authored: Wed Aug 9 10:12:58 2017 +0800
Committer: Kai Zheng <ka...@intel.com>
Committed: Wed Aug 9 10:12:58 2017 +0800

----------------------------------------------------------------------
 .../hadoop/hdfs/DistributedFileSystem.java      |  2 --
 .../ClientNamenodeProtocolTranslatorPB.java     |  4 ++-
 .../src/main/proto/erasurecoding.proto          |  2 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  4 +++
 ...tNamenodeProtocolServerSideTranslatorPB.java |  4 ++-
 .../namenode/ErasureCodingPolicyManager.java    | 12 +++++--
 .../hdfs/server/namenode/NameNodeRpcServer.java | 14 +++++++-
 .../org/apache/hadoop/hdfs/tools/ECAdmin.java   | 14 ++++----
 .../src/main/resources/hdfs-default.xml         |  8 +++++
 .../src/site/markdown/HDFSErasureCoding.md      |  8 +++++
 .../hadoop/hdfs/TestErasureCodingPolicies.java  | 24 ++++++++++++--
 .../server/namenode/TestEnabledECPolicies.java  | 10 +++---
 .../test/resources/testErasureCodingConf.xml    | 35 ++++++++++++++++++++
 13 files changed, 117 insertions(+), 24 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 13c5eb9..cd368d4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2515,8 +2515,6 @@ public class DistributedFileSystem extends FileSystem {
   public void setErasureCodingPolicy(final Path path,
       final String ecPolicyName) throws IOException {
     Path absF = fixRelativePart(path);
-    Preconditions.checkNotNull(ecPolicyName, "Erasure coding policy cannot be" +
-        " null.");
     new FileSystemLinkResolver<Void>() {
       @Override
       public Void doCall(final Path p) throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
index 388788c..aed4117 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
@@ -1518,7 +1518,9 @@ public class ClientNamenodeProtocolTranslatorPB implements
     final SetErasureCodingPolicyRequestProto.Builder builder =
         SetErasureCodingPolicyRequestProto.newBuilder();
     builder.setSrc(src);
-    builder.setEcPolicyName(ecPolicyName);
+    if (ecPolicyName != null) {
+      builder.setEcPolicyName(ecPolicyName);
+    }
     SetErasureCodingPolicyRequestProto req = builder.build();
     try {
       rpcProxy.setErasureCodingPolicy(null, req);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
index 65baab6..9f80350 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
@@ -25,7 +25,7 @@ import "hdfs.proto";
 
 message SetErasureCodingPolicyRequestProto {
   required string src = 1;
-  required string ecPolicyName = 2;
+  optional string ecPolicyName = 2;
 }
 
 message SetErasureCodingPolicyResponseProto {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index d9568f2..dc9bf76 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -564,6 +564,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String  DFS_NAMENODE_EC_POLICIES_ENABLED_DEFAULT = "";
   public static final String  DFS_NAMENODE_EC_POLICIES_MAX_CELLSIZE_KEY = "dfs.namenode.ec.policies.max.cellsize";
   public static final int     DFS_NAMENODE_EC_POLICIES_MAX_CELLSIZE_DEFAULT = 4 * 1024 * 1024;
+  public static final String  DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY =
+      "dfs.namenode.ec.system.default.policy";
+  public static final String  DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY_DEFAULT =
+      "RS-6-3-64k";
   public static final String  DFS_DN_EC_RECONSTRUCTION_STRIPED_READ_THREADS_KEY = "dfs.datanode.ec.reconstruction.stripedread.threads";
   public static final int     DFS_DN_EC_RECONSTRUCTION_STRIPED_READ_THREADS_DEFAULT = 20;
   public static final String  DFS_DN_EC_RECONSTRUCTION_STRIPED_READ_BUFFER_SIZE_KEY = "dfs.datanode.ec.reconstruction.stripedread.buffer.size";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
index 4ac49fe..38b81c6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
@@ -1488,7 +1488,9 @@ public class ClientNamenodeProtocolServerSideTranslatorPB implements
       RpcController controller, SetErasureCodingPolicyRequestProto req)
       throws ServiceException {
     try {
-      server.setErasureCodingPolicy(req.getSrc(), req.getEcPolicyName());
+      String ecPolicyName = req.hasEcPolicyName() ?
+          req.getEcPolicyName() : null;
+      server.setErasureCodingPolicy(req.getSrc(), ecPolicyName);
       return SetErasureCodingPolicyResponseProto.newBuilder().build();
     } catch (IOException e) {
       throw new ServiceException(e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
index 266d45c..18b8e8a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
@@ -92,9 +93,14 @@ public final class ErasureCodingPolicyManager {
 
   public void init(Configuration conf) {
     // Populate the list of enabled policies from configuration
-    final String[] policyNames = conf.getTrimmedStrings(
-        DFSConfigKeys.DFS_NAMENODE_EC_POLICIES_ENABLED_KEY,
-        DFSConfigKeys.DFS_NAMENODE_EC_POLICIES_ENABLED_DEFAULT);
+    final String[] enablePolicyNames = conf.getTrimmedStrings(
+            DFSConfigKeys.DFS_NAMENODE_EC_POLICIES_ENABLED_KEY,
+            DFSConfigKeys.DFS_NAMENODE_EC_POLICIES_ENABLED_DEFAULT);
+    final String defaultPolicyName = conf.getTrimmed(
+            DFSConfigKeys.DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY,
+            DFSConfigKeys.DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY_DEFAULT);
+    final String[] policyNames =
+            (String[]) ArrayUtils.add(enablePolicyNames, defaultPolicyName);
     this.userPoliciesByID = new TreeMap<>();
     this.userPoliciesByName = new TreeMap<>();
     this.removedPoliciesByName = new TreeMap<>();
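
A small, hypothetical illustration of how init() now assembles the effective policy list: the configured default policy name is appended to the enabled-policy names with commons-lang ArrayUtils, so the default is considered even when dfs.namenode.ec.policies.enabled is empty. The literal values below are examples, not taken from a real cluster.

import java.util.Arrays;
import org.apache.commons.lang.ArrayUtils;

public class PolicyListSketch {
  public static void main(String[] args) {
    String[] enabled = {"RS-10-4-64k"};  // dfs.namenode.ec.policies.enabled
    String defaultPolicy = "RS-6-3-64k"; // dfs.namenode.ec.system.default.policy
    // commons-lang 2 declares add(Object[], Object), hence the cast in the patch.
    String[] effective = (String[]) ArrayUtils.add(enabled, defaultPolicy);
    System.out.println(Arrays.toString(effective)); // [RS-10-4-64k, RS-6-3-64k]
  }
}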

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index 9265381..d304d3d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
@@ -251,13 +251,15 @@ public class NameNodeRpcServer implements NamenodeProtocols {
   
   private final String minimumDataNodeVersion;
 
+  private final String defaultECPolicyName;
+
   public NameNodeRpcServer(Configuration conf, NameNode nn)
       throws IOException {
     this.nn = nn;
     this.namesystem = nn.getNamesystem();
     this.retryCache = namesystem.getRetryCache();
     this.metrics = NameNode.getNameNodeMetrics();
-    
+
     int handlerCount = 
       conf.getInt(DFS_NAMENODE_HANDLER_COUNT_KEY, 
                   DFS_NAMENODE_HANDLER_COUNT_DEFAULT);
@@ -490,6 +492,10 @@ public class NameNodeRpcServer implements NamenodeProtocols {
         DFSConfigKeys.DFS_NAMENODE_MIN_SUPPORTED_DATANODE_VERSION_KEY,
         DFSConfigKeys.DFS_NAMENODE_MIN_SUPPORTED_DATANODE_VERSION_DEFAULT);
 
+    defaultECPolicyName = conf.get(
+        DFSConfigKeys.DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY,
+        DFSConfigKeys.DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY_DEFAULT);
+
     // Set terse exception whose stack trace won't be logged
     clientRpcServer.addTerseExceptions(SafeModeException.class,
         FileNotFoundException.class,
@@ -2055,6 +2061,12 @@ public class NameNodeRpcServer implements NamenodeProtocols {
     }
     boolean success = false;
     try {
+      if (ecPolicyName == null) {
+        ecPolicyName = defaultECPolicyName;
+        LOG.trace("No policy name is specified, " +
+            "set the default policy name instead");
+      }
+      LOG.trace("Set erasure coding policy " + ecPolicyName + " on " + src);
       namesystem.setErasureCodingPolicy(src, ecPolicyName, cacheEntry != null);
       success = true;
     } finally {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
index 5006b5a..46600a0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
@@ -335,11 +335,6 @@ public class ECAdmin extends Configured implements Tool {
 
       final String ecPolicyName = StringUtils.popOptionWithArgument("-policy",
           args);
-      if (ecPolicyName == null) {
-        System.err.println("Please specify the policy name.\nUsage: " +
-            getLongUsage());
-        return 1;
-      }
 
       if (args.size() > 0) {
         System.err.println(getName() + ": Too many arguments");
@@ -350,8 +345,13 @@ public class ECAdmin extends Configured implements Tool {
       final DistributedFileSystem dfs = AdminHelper.getDFS(p.toUri(), conf);
       try {
         dfs.setErasureCodingPolicy(p, ecPolicyName);
-        System.out.println("Set erasure coding policy " + ecPolicyName +
-            " on " + path);
+        if (ecPolicyName == null) {
+          System.out.println("Set default erasure coding policy" +
+              " on " + path);
+        } else {
+          System.out.println("Set erasure coding policy " + ecPolicyName +
+              " on " + path);
+        }
       } catch (Exception e) {
         System.err.println(AdminHelper.prettifyException(e));
         return 2;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index bb62359..4942967 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -2976,6 +2976,14 @@
 </property>
 
 <property>
+  <name>dfs.namenode.ec.system.default.policy</name>
+  <value>RS-6-3-64k</value>
+  <description>The default erasure coding policy name that is applied
+    to a path when no policy name is passed.
+  </description>
+</property>
+
+<property>
   <name>dfs.namenode.ec.policies.max.cellsize</name>
   <value>4194304</value>
   <description>The maximum cell size of erasure coding policy. Default is 4MB.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
index 88293ba..4a48c2a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
@@ -117,6 +117,11 @@ Deployment
   be more appropriate. If the administrator only cares about node-level fault-tolerance, `RS-10-4-64k` would still be appropriate as long as
   there are at least 14 DataNodes in the cluster.
 
+  A system default EC policy can be configured via the 'dfs.namenode.ec.system.default.policy' property. With this configuration,
+  the default EC policy is used whenever no policy name is passed as an argument to the '-setPolicy' command.
+
+  By default, 'dfs.namenode.ec.system.default.policy' is set to "RS-6-3-64k".
+
   The codec implementations for Reed-Solomon and XOR can be configured with the following client and DataNode configuration keys:
   `io.erasurecode.codec.rs.rawcoders` for the default RS codec,
   `io.erasurecode.codec.rs-legacy.rawcoders` for the legacy RS codec,
@@ -167,6 +172,9 @@ Below are the details about each command.
      `path`: A directory in HDFS. This is a mandatory parameter. Setting a policy only affects newly created files, and does not affect existing files.
 
       `policyName`: The erasure coding policy to be used for files under this directory.
+      This parameter can be omitted when a 'dfs.namenode.ec.system.default.policy' configuration is set,
+      in which case the configured default policy is applied to the path.
+
 
  *  `[-getPolicy -path <path>]`
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
index 127dad1..06edb1a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
@@ -209,9 +209,9 @@ public class TestErasureCodingPolicies {
     cluster.restartNameNodes();
     cluster.waitActive();
 
-    // No policies should be enabled after restart
-    Assert.assertTrue("No policies should be enabled after restart",
-        fs.getAllErasureCodingPolicies().isEmpty());
+    // Only default policy should be enabled after restart
+    Assert.assertEquals("Only default policy should be enabled after restart",
+        1, fs.getAllErasureCodingPolicies().size());
 
     // Already set directory-level policies should still be in effect
     Path disabledPolicy = new Path(dir1, "afterDisabled");
@@ -360,6 +360,24 @@ public class TestErasureCodingPolicies {
   }
 
   @Test
+  public void testSetDefaultPolicy()
+          throws IOException {
+    String src = "/ecDir";
+    final Path ecDir = new Path(src);
+    try {
+      fs.mkdir(ecDir, FsPermission.getDirDefault());
+      fs.getClient().setErasureCodingPolicy(src, null);
+      String actualECPolicyName = fs.getClient().
+          getErasureCodingPolicy(src).getName();
+      String expectedECPolicyName =
+          conf.get(DFSConfigKeys.DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY,
+          DFSConfigKeys.DFS_NAMENODE_EC_SYSTEM_DEFAULT_POLICY_DEFAULT);
+      assertEquals(expectedECPolicyName, actualECPolicyName);
+    } catch (Exception e) {
+    }
+  }
+
+  @Test
   public void testGetAllErasureCodingPolicies() throws Exception {
     Collection<ErasureCodingPolicy> allECPolicies = fs
         .getAllErasureCodingPolicies();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEnabledECPolicies.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEnabledECPolicies.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEnabledECPolicies.java
index fe95734..d769f8b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEnabledECPolicies.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEnabledECPolicies.java
@@ -75,7 +75,7 @@ public class TestEnabledECPolicies {
     String defaultECPolicies = conf.get(
         DFSConfigKeys.DFS_NAMENODE_EC_POLICIES_ENABLED_KEY,
         DFSConfigKeys.DFS_NAMENODE_EC_POLICIES_ENABLED_DEFAULT);
-    expectValidPolicy(defaultECPolicies, 0);
+    expectValidPolicy(defaultECPolicies, 1);
   }
 
   @Test
@@ -98,10 +98,10 @@ public class TestEnabledECPolicies {
     String ecPolicyName = StripedFileTestUtil.getDefaultECPolicy().getName();
     expectValidPolicy(ecPolicyName, 1);
     expectValidPolicy(ecPolicyName + ", ", 1);
-    expectValidPolicy(",", 0);
+    expectValidPolicy(",", 1);
     expectValidPolicy(", " + ecPolicyName, 1);
-    expectValidPolicy(" ", 0);
-    expectValidPolicy(" , ", 0);
+    expectValidPolicy(" ", 1);
+    expectValidPolicy(" , ", 1);
   }
 
   @Test
@@ -147,7 +147,7 @@ public class TestEnabledECPolicies {
       Assert.assertTrue("Did not find specified EC policy " + p.getName(),
           found.contains(p.getName()));
     }
-    Assert.assertEquals(enabledPolicies.length, found.size());
+    Assert.assertEquals(enabledPolicies.length, found.size()-1);
     // Check that getEnabledPolicyByName only returns enabled policies
     for (ErasureCodingPolicy p: SystemErasureCodingPolicies.getPolicies()) {
       if (found.contains(p.getName())) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
index 127effc..c68c6d6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
@@ -553,6 +553,41 @@
     </test>
 
     <test>
+      <description>setPolicy : set erasure coding policy without giving a specific policy name</description>
+      <test-commands>
+        <command>-fs NAMENODE -mkdir /ecdir</command>
+        <ec-admin-command>-fs NAMENODE -setPolicy -path /ecdir</ec-admin-command>
+      </test-commands>
+      <cleanup-commands>
+        <command>-fs NAMENODE -rmdir /ecdir</command>
+      </cleanup-commands>
+      <comparators>
+        <comparator>
+          <type>SubstringComparator</type>
+          <expected-output>Set default erasure coding policy on /ecdir</expected-output>
+        </comparator>
+      </comparators>
+    </test>
+
+    <test>
+      <description>getPolicy: get the default policy after setPolicy without giving a specific policy name</description>
+      <test-commands>
+        <command>-fs NAMENODE -mkdir /ecdir</command>
+        <ec-admin-command>-fs NAMENODE -setPolicy -path /ecdir</ec-admin-command>
+        <ec-admin-command>-fs NAMENODE -getPolicy -path /ecdir</ec-admin-command>
+      </test-commands>
+      <cleanup-commands>
+        <command>-fs NAMENODE -rmdir /ecdir</command>
+      </cleanup-commands>
+      <comparators>
+        <comparator>
+          <type>SubstringComparator</type>
+          <expected-output>RS-6-3-64k</expected-output>
+        </comparator>
+      </comparators>
+    </test>
+
+    <test>
       <description>getPolicy : illegal parameters - path is missing</description>
       <test-commands>
         <ec-admin-command>-fs NAMENODE -getPolicy </ec-admin-command>




[04/50] [abbrv] hadoop git commit: YARN-6811. [ATS1.5] All history logs should be kept under its own User Directory. Contributed by Rohith Sharma K S.

Posted by wa...@apache.org.
YARN-6811. [ATS1.5] All history logs should be kept under its own User Directory. Contributed by Rohith Sharma K S.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f44b349b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f44b349b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f44b349b

Branch: refs/heads/YARN-5881
Commit: f44b349b813508f0f6d99ca10bddba683dedf6c4
Parents: bbc6d25
Author: Junping Du <ju...@apache.org>
Authored: Fri Aug 4 16:03:56 2017 -0700
Committer: Junping Du <ju...@apache.org>
Committed: Fri Aug 4 16:03:56 2017 -0700

----------------------------------------------------------------------
 .../hadoop/yarn/conf/YarnConfiguration.java     |  4 +
 .../api/impl/FileSystemTimelineWriter.java      | 40 ++++++--
 .../src/main/resources/yarn-default.xml         | 10 ++
 .../api/impl/TestTimelineClientForATS1_5.java   | 81 ++++++++++++----
 .../timeline/EntityGroupFSTimelineStore.java    | 23 ++++-
 .../TestEntityGroupFSTimelineStore.java         | 99 ++++++++++++++++++--
 6 files changed, 224 insertions(+), 33 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f44b349b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index d608df8..71a7134 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2069,6 +2069,10 @@ public class YarnConfiguration extends Configuration {
       = TIMELINE_SERVICE_PREFIX
       + "entity-file.fs-support-append";
 
+  public static final String
+      TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_WITH_USER_DIR =
+      TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_PREFIX + "with-user-dir";
+
   /**
    * Settings for timeline service v2.0
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f44b349b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
index fc3385b..b7bb48e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
@@ -145,9 +145,12 @@ public class FileSystemTimelineWriter extends TimelineWriter{
         new LogFDsCache(flushIntervalSecs, cleanIntervalSecs, ttl,
             timerTaskTTL);
 
-    this.isAppendSupported =
-        conf.getBoolean(
-            YarnConfiguration.TIMELINE_SERVICE_ENTITYFILE_FS_SUPPORT_APPEND, true);
+    this.isAppendSupported = conf.getBoolean(
+        YarnConfiguration.TIMELINE_SERVICE_ENTITYFILE_FS_SUPPORT_APPEND, true);
+
+    boolean storeInsideUserDir = conf.getBoolean(
+        YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_WITH_USER_DIR,
+        false);
 
     objMapper = createObjectMapper();
 
@@ -157,8 +160,8 @@ public class FileSystemTimelineWriter extends TimelineWriter{
         YarnConfiguration
             .DEFAULT_TIMELINE_SERVICE_CLIENT_INTERNAL_ATTEMPT_DIR_CACHE_SIZE);
 
-    attemptDirCache =
-        new AttemptDirCache(attemptDirCacheSize, fs, activePath);
+    attemptDirCache = new AttemptDirCache(attemptDirCacheSize, fs, activePath,
+        authUgi, storeInsideUserDir);
 
     if (LOG.isDebugEnabled()) {
       StringBuilder debugMSG = new StringBuilder();
@@ -171,6 +174,8 @@ public class FileSystemTimelineWriter extends TimelineWriter{
               + "=" + ttl + ", " +
           YarnConfiguration.TIMELINE_SERVICE_ENTITYFILE_FS_SUPPORT_APPEND
               + "=" + isAppendSupported + ", " +
+          YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_WITH_USER_DIR
+              + "=" + storeInsideUserDir + ", " +
           YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_ACTIVE_DIR
               + "=" + activePath);
 
@@ -946,8 +951,11 @@ public class FileSystemTimelineWriter extends TimelineWriter{
     private final Map<ApplicationAttemptId, Path> attemptDirCache;
     private final FileSystem fs;
     private final Path activePath;
+    private final UserGroupInformation authUgi;
+    private final boolean storeInsideUserDir;
 
-    public AttemptDirCache(int cacheSize, FileSystem fs, Path activePath) {
+    public AttemptDirCache(int cacheSize, FileSystem fs, Path activePath,
+        UserGroupInformation ugi, boolean storeInsideUserDir) {
       this.attemptDirCacheSize = cacheSize;
       this.attemptDirCache =
           new LinkedHashMap<ApplicationAttemptId, Path>(
@@ -961,6 +969,8 @@ public class FileSystemTimelineWriter extends TimelineWriter{
           };
       this.fs = fs;
       this.activePath = activePath;
+      this.authUgi = ugi;
+      this.storeInsideUserDir = storeInsideUserDir;
     }
 
     public Path getAppAttemptDir(ApplicationAttemptId attemptId)
@@ -993,8 +1003,8 @@ public class FileSystemTimelineWriter extends TimelineWriter{
     }
 
     private Path createApplicationDir(ApplicationId appId) throws IOException {
-      Path appDir =
-          new Path(activePath, appId.toString());
+      Path appRootDir = getAppRootDir(authUgi.getShortUserName());
+      Path appDir = new Path(appRootDir, appId.toString());
       if (FileSystem.mkdirs(fs, appDir,
           new FsPermission(APP_LOG_DIR_PERMISSIONS))) {
         if (LOG.isDebugEnabled()) {
@@ -1003,5 +1013,19 @@ public class FileSystemTimelineWriter extends TimelineWriter{
       }
       return appDir;
     }
+
+    private Path getAppRootDir(String user) throws IOException {
+      if (!storeInsideUserDir) {
+        return activePath;
+      }
+      Path userDir = new Path(activePath, user);
+      if (FileSystem.mkdirs(fs, userDir,
+          new FsPermission(APP_LOG_DIR_PERMISSIONS))) {
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("New user directory created - " + userDir);
+        }
+      }
+      return userDir;
+    }
   }
 }
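
To make the resulting layout concrete, here is a hypothetical sketch of the path selection getAppRootDir() performs, with plain strings standing in for Hadoop's Path and UserGroupInformation; the directory and user names are illustrative.

public class UserDirLayoutSketch {
  public static void main(String[] args) {
    String activeDir = "/ats/active";  // entity-group-fs-store.active-dir
    String user = "alice";             // authUgi.getShortUserName()
    boolean storeInsideUserDir = true; // the new with-user-dir flag

    String appRoot = storeInsideUserDir ? activeDir + "/" + user : activeDir;
    String appDir = appRoot + "/application_1501509265053_0001";
    // flag on : /ats/active/alice/application_1501509265053_0001
    // flag off: /ats/active/application_1501509265053_0001
    System.out.println(appDir);
  }
}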

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f44b349b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 564a451..95b8a88 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3244,4 +3244,14 @@
     <value>0.0.0.0:8091</value>
   </property>
 
+  <property>
+    <description>
+       TimelineClient 1.5 configuration that controls whether an active
+       application's timeline data is stored within a per-user directory,
+       i.e. ${yarn.timeline-service.entity-group-fs-store.active-dir}/${user.name}.
+    </description>
+    <name>yarn.timeline-service.entity-group-fs-store.with-user-dir</name>
+    <value>false</value>
+  </property>
+
 </configuration>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f44b349b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java
index d3826e1..8573033 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java
@@ -59,25 +59,30 @@ public class TestTimelineClientForATS1_5 {
   private static FileContext localFS;
   private static File localActiveDir;
   private TimelineWriter spyTimelineWriter;
+  private UserGroupInformation authUgi;
 
   @Before
   public void setup() throws Exception {
     localFS = FileContext.getLocalFSFileContext();
     localActiveDir =
         new File("target", this.getClass().getSimpleName() + "-activeDir")
-          .getAbsoluteFile();
+            .getAbsoluteFile();
     localFS.delete(new Path(localActiveDir.getAbsolutePath()), true);
     localActiveDir.mkdir();
     LOG.info("Created activeDir in " + localActiveDir.getAbsolutePath());
+    authUgi = UserGroupInformation.getCurrentUser();
+  }
+
+  private YarnConfiguration getConfigurations() {
     YarnConfiguration conf = new YarnConfiguration();
     conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true);
     conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 1.5f);
     conf.set(YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_ACTIVE_DIR,
-      localActiveDir.getAbsolutePath());
+        localActiveDir.getAbsolutePath());
     conf.set(
-      YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_SUMMARY_ENTITY_TYPES,
-      "summary_type");
-    client = createTimelineClient(conf);
+        YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_SUMMARY_ENTITY_TYPES,
+        "summary_type");
+    return conf;
   }
 
   @After
@@ -90,6 +95,21 @@ public class TestTimelineClientForATS1_5 {
 
   @Test
   public void testPostEntities() throws Exception {
+    client = createTimelineClient(getConfigurations());
+    verifyForPostEntities(false);
+  }
+
+  @Test
+  public void testPostEntitiesToKeepUnderUserDir() throws Exception {
+    YarnConfiguration conf = getConfigurations();
+    conf.setBoolean(
+        YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_WITH_USER_DIR,
+        true);
+    client = createTimelineClient(conf);
+    verifyForPostEntities(true);
+  }
+
+  private void verifyForPostEntities(boolean storeInsideUserDir) {
     ApplicationId appId =
         ApplicationId.newInstance(System.currentTimeMillis(), 1);
     TimelineEntityGroupId groupId =
@@ -118,7 +138,8 @@ public class TestTimelineClientForATS1_5 {
       entityTDB[0] = entities[0];
       verify(spyTimelineWriter, times(1)).putEntities(entityTDB);
       Assert.assertTrue(localFS.util().exists(
-        new Path(getAppAttemptDir(attemptId1), "summarylog-"
+          new Path(getAppAttemptDir(attemptId1, storeInsideUserDir),
+              "summarylog-"
             + attemptId1.toString())));
       reset(spyTimelineWriter);
 
@@ -132,13 +153,16 @@ public class TestTimelineClientForATS1_5 {
       verify(spyTimelineWriter, times(0)).putEntities(
         any(TimelineEntity[].class));
       Assert.assertTrue(localFS.util().exists(
-        new Path(getAppAttemptDir(attemptId2), "summarylog-"
+          new Path(getAppAttemptDir(attemptId2, storeInsideUserDir),
+              "summarylog-"
             + attemptId2.toString())));
       Assert.assertTrue(localFS.util().exists(
-        new Path(getAppAttemptDir(attemptId2), "entitylog-"
+          new Path(getAppAttemptDir(attemptId2, storeInsideUserDir),
+              "entitylog-"
             + groupId.toString())));
       Assert.assertTrue(localFS.util().exists(
-        new Path(getAppAttemptDir(attemptId2), "entitylog-"
+          new Path(getAppAttemptDir(attemptId2, storeInsideUserDir),
+              "entitylog-"
             + groupId2.toString())));
       reset(spyTimelineWriter);
     } catch (Exception e) {
@@ -148,6 +172,21 @@ public class TestTimelineClientForATS1_5 {
 
   @Test
   public void testPutDomain() {
+    client = createTimelineClient(getConfigurations());
+    verifyForPutDomain(false);
+  }
+
+  @Test
+  public void testPutDomainToKeepUnderUserDir() {
+    YarnConfiguration conf = getConfigurations();
+    conf.setBoolean(
+        YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_WITH_USER_DIR,
+        true);
+    client = createTimelineClient(conf);
+    verifyForPutDomain(true);
+  }
+
+  private void verifyForPutDomain(boolean storeInsideUserDir) {
     ApplicationId appId =
         ApplicationId.newInstance(System.currentTimeMillis(), 1);
     ApplicationAttemptId attemptId1 =
@@ -161,23 +200,33 @@ public class TestTimelineClientForATS1_5 {
 
       client.putDomain(attemptId1, domain);
       verify(spyTimelineWriter, times(0)).putDomain(domain);
-      Assert.assertTrue(localFS.util().exists(
-        new Path(getAppAttemptDir(attemptId1), "domainlog-"
-            + attemptId1.toString())));
+      Assert.assertTrue(localFS.util()
+          .exists(new Path(getAppAttemptDir(attemptId1, storeInsideUserDir),
+              "domainlog-" + attemptId1.toString())));
       reset(spyTimelineWriter);
     } catch (Exception e) {
       Assert.fail("Exception is not expected." + e);
     }
   }
 
-  private Path getAppAttemptDir(ApplicationAttemptId appAttemptId) {
-    Path appDir =
-        new Path(localActiveDir.getAbsolutePath(), appAttemptId
-          .getApplicationId().toString());
+  private Path getAppAttemptDir(ApplicationAttemptId appAttemptId,
+      boolean storeInsideUserDir) {
+    Path userDir = getUserDir(appAttemptId, storeInsideUserDir);
+    Path appDir = new Path(userDir, appAttemptId.getApplicationId().toString());
     Path attemptDir = new Path(appDir, appAttemptId.toString());
     return attemptDir;
   }
 
+  private Path getUserDir(ApplicationAttemptId appAttemptId,
+      boolean storeInsideUserDir) {
+    if (!storeInsideUserDir) {
+      return new Path(localActiveDir.getAbsolutePath());
+    }
+    Path userDir =
+        new Path(localActiveDir.getAbsolutePath(), authUgi.getShortUserName());
+    return userDir;
+  }
+
   private static TimelineEntity generateEntity(String type) {
     TimelineEntity entity = new TimelineEntity();
     entity.setEntityId("entity id");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f44b349b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java
index 1675a48..80baf89 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java
@@ -356,7 +356,13 @@ public class EntityGroupFSTimelineStore extends CompositeService
   @VisibleForTesting
   int scanActiveLogs() throws IOException {
     long startTime = Time.monotonicNow();
-    RemoteIterator<FileStatus> iter = list(activeRootPath);
+    int logsToScanCount = scanActiveLogs(activeRootPath);
+    metrics.addActiveLogDirScanTime(Time.monotonicNow() - startTime);
+    return logsToScanCount;
+  }
+
+  int scanActiveLogs(Path dir) throws IOException {
+    RemoteIterator<FileStatus> iter = list(dir);
     int logsToScanCount = 0;
     while (iter.hasNext()) {
       FileStatus stat = iter.next();
@@ -368,10 +374,9 @@ public class EntityGroupFSTimelineStore extends CompositeService
         AppLogs logs = getAndSetActiveLog(appId, stat.getPath());
         executor.execute(new ActiveLogParser(logs));
       } else {
-        LOG.debug("Unable to parse entry {}", name);
+        logsToScanCount += scanActiveLogs(stat.getPath());
       }
     }
-    metrics.addActiveLogDirScanTime(Time.monotonicNow() - startTime);
     return logsToScanCount;
   }
 
@@ -418,6 +423,18 @@ public class EntityGroupFSTimelineStore extends CompositeService
         appDirPath = getActiveAppPath(applicationId);
         if (fs.exists(appDirPath)) {
           appState = AppState.ACTIVE;
+        } else {
+          // check for user directory inside active path
+          RemoteIterator<FileStatus> iter = list(activeRootPath);
+          while (iter.hasNext()) {
+            Path child = new Path(iter.next().getPath().getName(),
+                applicationId.toString());
+            appDirPath = new Path(activeRootPath, child);
+            if (fs.exists(appDirPath)) {
+              appState = AppState.ACTIVE;
+              break;
+            }
+          }
         }
       }
       if (appState != AppState.UNKNOWN) {
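
Because application directories may now sit one level deeper, scanActiveLogs() recurses into entries whose names do not parse as application IDs, and getAppState() probes one level of user directories before giving up. A dependency-free sketch of that traversal using java.io.File; the startsWith check is a simplification of the real application-ID parsing, and the path is illustrative:

import java.io.File;

public class ActiveScanSketch {
  static int scan(File dir) {
    int count = 0;
    File[] entries = dir.listFiles();
    if (entries == null) {
      return 0;                  // not a directory, or unreadable
    }
    for (File entry : entries) {
      if (entry.getName().startsWith("application_")) {
        count++;                 // would be handed to an ActiveLogParser
      } else if (entry.isDirectory()) {
        count += scan(entry);    // descend into per-user directories
      }
    }
    return count;
  }

  public static void main(String[] args) {
    System.out.println(scan(new File("/tmp/ats-active")));
  }
}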

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f44b349b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestEntityGroupFSTimelineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestEntityGroupFSTimelineStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestEntityGroupFSTimelineStore.java
index 8540d45..0458722 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestEntityGroupFSTimelineStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestEntityGroupFSTimelineStore.java
@@ -37,6 +37,8 @@ import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.AppState;
+import org.apache.hadoop.yarn.server.timeline.TimelineReader.Field;
 import org.apache.hadoop.yarn.util.ConverterUtils;
 import org.junit.After;
 import org.junit.AfterClass;
@@ -58,7 +60,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
-import static org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.AppState;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
@@ -91,6 +92,7 @@ public class TestEntityGroupFSTimelineStore extends TimelineStoreTestUtils {
   private static ApplicationId mainTestAppId;
   private static Path mainTestAppDirPath;
   private static Path testDoneDirPath;
+  private static Path testActiveDirPath;
   private static String mainEntityLogFileName;
 
   private EntityGroupFSTimelineStore store;
@@ -125,23 +127,28 @@ public class TestEntityGroupFSTimelineStore extends TimelineStoreTestUtils {
               + i);
       sampleAppIds.add(appId);
     }
+    testActiveDirPath = getTestRootPath("active");
     // Among all sample applicationIds, choose the first one for most of the
     // tests.
     mainTestAppId = sampleAppIds.get(0);
-    mainTestAppDirPath = getTestRootPath(mainTestAppId.toString());
+    mainTestAppDirPath = new Path(testActiveDirPath, mainTestAppId.toString());
     mainEntityLogFileName = EntityGroupFSTimelineStore.ENTITY_LOG_PREFIX
           + EntityGroupPlugInForTest.getStandardTimelineGroupId(mainTestAppId);
 
     testDoneDirPath = getTestRootPath("done");
     config.set(YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_DONE_DIR,
         testDoneDirPath.toString());
+    config.set(
+        YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_ACTIVE_DIR,
+        testActiveDirPath.toString());
   }
 
   @Before
   public void setup() throws Exception {
     for (ApplicationId appId : sampleAppIds) {
-      Path attemotDirPath = new Path(getTestRootPath(appId.toString()),
-          getAttemptDirName(appId));
+      Path attemotDirPath =
+          new Path(new Path(testActiveDirPath, appId.toString()),
+              getAttemptDirName(appId));
       createTestFiles(appId, attemotDirPath);
     }
 
@@ -178,7 +185,7 @@ public class TestEntityGroupFSTimelineStore extends TimelineStoreTestUtils {
   public void tearDown() throws Exception {
     store.stop();
     for (ApplicationId appId : sampleAppIds) {
-      fs.delete(getTestRootPath(appId.toString()), true);
+      fs.delete(new Path(testActiveDirPath, appId.toString()), true);
     }
     if (testJar != null) {
       testJar.delete();
@@ -414,8 +421,88 @@ public class TestEntityGroupFSTimelineStore extends TimelineStoreTestUtils {
 
   }
 
+  @Test
+  public void testGetEntityPluginRead() throws Exception {
+    EntityGroupFSTimelineStore store = null;
+    ApplicationId appId =
+        ApplicationId.fromString("application_1501509265053_0001");
+    String user = UserGroupInformation.getCurrentUser().getShortUserName();
+    Path userBase = new Path(testActiveDirPath, user);
+    Path userAppRoot = new Path(userBase, appId.toString());
+    Path attemotDirPath = new Path(userAppRoot, getAttemptDirName(appId));
+
+    try {
+      store = createAndStartTimelineStore(AppState.ACTIVE);
+      String logFileName = EntityGroupFSTimelineStore.ENTITY_LOG_PREFIX
+          + EntityGroupPlugInForTest.getStandardTimelineGroupId(appId);
+      createTestFiles(appId, attemotDirPath, logFileName);
+      TimelineEntity entity = store.getEntity(entityNew.getEntityId(),
+          entityNew.getEntityType(), EnumSet.allOf(Field.class));
+      assertNotNull(entity);
+      assertEquals(entityNew.getEntityId(), entity.getEntityId());
+      assertEquals(entityNew.getEntityType(), entity.getEntityType());
+    } finally {
+      if (store != null) {
+        store.stop();
+      }
+      fs.delete(userBase, true);
+    }
+  }
+
+  @Test
+  public void testScanActiveLogsAndMoveToDonePluginRead() throws Exception {
+    EntityGroupFSTimelineStore store = null;
+    ApplicationId appId =
+        ApplicationId.fromString("application_1501509265053_0002");
+    String user = UserGroupInformation.getCurrentUser().getShortUserName();
+    Path userBase = new Path(testActiveDirPath, user);
+    Path userAppRoot = new Path(userBase, appId.toString());
+    Path attemotDirPath = new Path(userAppRoot, getAttemptDirName(appId));
+
+    try {
+      store = createAndStartTimelineStore(AppState.COMPLETED);
+      String logFileName = EntityGroupFSTimelineStore.ENTITY_LOG_PREFIX
+          + EntityGroupPlugInForTest.getStandardTimelineGroupId(appId);
+      createTestFiles(appId, attemotDirPath, logFileName);
+      store.scanActiveLogs();
+
+      TimelineEntity entity = store.getEntity(entityNew.getEntityId(),
+          entityNew.getEntityType(), EnumSet.allOf(Field.class));
+      assertNotNull(entity);
+      assertEquals(entityNew.getEntityId(), entity.getEntityId());
+      assertEquals(entityNew.getEntityType(), entity.getEntityType());
+    } finally {
+      if (store != null) {
+        store.stop();
+      }
+      fs.delete(userBase, true);
+    }
+  }
+
+  private EntityGroupFSTimelineStore createAndStartTimelineStore(
+      AppState appstate) {
+    // stop before creating new store to get the lock
+    store.stop();
+    
+    EntityGroupFSTimelineStore newStore = new EntityGroupFSTimelineStore() {
+      @Override
+      protected AppState getAppState(ApplicationId appId) throws IOException {
+        return appstate;
+      }
+    };
+    newStore.init(config);
+    newStore.setFs(fs);
+    newStore.start();
+    return newStore;
+  }
+
   private void createTestFiles(ApplicationId appId, Path attemptDirPath)
       throws IOException {
+    createTestFiles(appId, attemptDirPath, mainEntityLogFileName);
+  }
+
+  private void createTestFiles(ApplicationId appId, Path attemptDirPath,
+      String logPath) throws IOException {
     TimelineEntities entities = PluginStoreTestUtils.generateTestEntities();
     PluginStoreTestUtils.writeEntities(entities,
         new Path(attemptDirPath, TEST_SUMMARY_LOG_FILE_NAME), fs);
@@ -429,7 +516,7 @@ public class TestEntityGroupFSTimelineStore extends TimelineStoreTestUtils {
     TimelineEntities entityList = new TimelineEntities();
     entityList.addEntity(entityNew);
     PluginStoreTestUtils.writeEntities(entityList,
-        new Path(attemptDirPath, mainEntityLogFileName), fs);
+        new Path(attemptDirPath, logPath), fs);
 
     FSDataOutputStream out = fs.create(
         new Path(attemptDirPath, TEST_DOMAIN_LOG_FILE_NAME));




[36/50] [abbrv] hadoop git commit: YARN-6033. Add support for sections in container-executor configuration file. (Varun Vasudev via wandga)

Posted by wa...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
new file mode 100644
index 0000000..6ee0ab2
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
@@ -0,0 +1,432 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <gtest/gtest.h>
+#include <fstream>
+
+extern "C" {
+#include "util.h"
+#include "configuration.h"
+#include "configuration.c"
+}
+
+
+namespace ContainerExecutor {
+  class TestConfiguration : public ::testing::Test {
+  protected:
+    virtual void SetUp() {
+      new_config_format_file = "test-configurations/configuration-1.cfg";
+      old_config_format_file = "test-configurations/old-config.cfg";
+      mixed_config_format_file = "test-configurations/configuration-2.cfg";
+      loadConfigurations();
+      return;
+    }
+
+    void loadConfigurations() {
+      int ret = 0;
+      ret = read_config(new_config_format_file.c_str(), &new_config_format);
+      ASSERT_EQ(0, ret);
+      ret = read_config(old_config_format_file.c_str(), &old_config_format);
+      ASSERT_EQ(0, ret);
+      ret = read_config(mixed_config_format_file.c_str(),
+                        &mixed_config_format);
+      ASSERT_EQ(0, ret);
+    }
+
+    virtual void TearDown() {
+      free_configuration(&new_config_format);
+      free_configuration(&old_config_format);
+      // Release the mixed-format configuration loaded in SetUp() as well.
+      free_configuration(&mixed_config_format);
+      return;
+    }
+
+    std::string new_config_format_file;
+    std::string old_config_format_file;
+    std::string mixed_config_format_file;
+    struct configuration new_config_format;
+    struct configuration old_config_format;
+    struct configuration mixed_config_format;
+  };
+
+
+  TEST_F(TestConfiguration, test_get_configuration_values_delimiter) {
+    char **split_values;
+    split_values = get_configuration_values_delimiter(NULL, "", &old_config_format, "%");
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values_delimiter("yarn.local.dirs", NULL,
+                      &old_config_format, "%");
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values_delimiter("yarn.local.dirs", "",
+                      NULL, "%");
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values_delimiter("yarn.local.dirs", "",
+                      &old_config_format, NULL);
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values_delimiter("yarn.local.dirs", "abcd",
+                                                      &old_config_format, "%");
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values_delimiter("yarn.local.dirs", "",
+                      &old_config_format, "%");
+    ASSERT_STREQ("/var/run/yarn", split_values[0]);
+    ASSERT_STREQ("/tmp/mydir", split_values[1]);
+    ASSERT_EQ(NULL, split_values[2]);
+    free_values(split_values);
+    split_values = get_configuration_values_delimiter("allowed.system.users",
+                      "", &old_config_format, "%");
+    ASSERT_STREQ("nobody,daemon", split_values[0]);
+    ASSERT_EQ(NULL, split_values[1]);
+    free_values(split_values);
+  }
+
+  TEST_F(TestConfiguration, test_get_configuration_values) {
+    char **split_values;
+    split_values = get_configuration_values(NULL, "", &old_config_format);
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values("yarn.local.dirs", NULL, &old_config_format);
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values("yarn.local.dirs", "", NULL);
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values("yarn.local.dirs", "abcd", &old_config_format);
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_configuration_values("yarn.local.dirs", "", &old_config_format);
+    ASSERT_STREQ("/var/run/yarn%/tmp/mydir", split_values[0]);
+    ASSERT_EQ(NULL, split_values[1]);
+    free_values(split_values);
+    split_values = get_configuration_values("allowed.system.users", "",
+                      &old_config_format);
+    ASSERT_STREQ("nobody", split_values[0]);
+    ASSERT_STREQ("daemon", split_values[1]);
+    ASSERT_EQ(NULL, split_values[2]);
+    free_values(split_values);
+  }
+
+  TEST_F(TestConfiguration, test_get_configuration_value) {
+    std::string key_value_array[5][2] = {
+        {"yarn.nodemanager.linux-container-executor.group", "yarn"},
+        {"min.user.id", "1000"},
+        {"allowed.system.users", "nobody,daemon"},
+        {"feature.docker.enabled", "1"},
+        {"yarn.local.dirs", "/var/run/yarn%/tmp/mydir"}
+    };
+    char *value;
+    value = get_configuration_value(NULL, "", &old_config_format);
+    ASSERT_EQ(NULL, value);
+    value = get_configuration_value("yarn.local.dirs", NULL, &old_config_format);
+    ASSERT_EQ(NULL, value);
+    value = get_configuration_value("yarn.local.dirs", "", NULL);
+    ASSERT_EQ(NULL, value);
+
+    for (int i = 0; i < 5; ++i) {
+      value = get_configuration_value(key_value_array[i][0].c_str(),
+                "", &old_config_format);
+      ASSERT_STREQ(key_value_array[i][1].c_str(), value);
+      free(value);
+    }
+    value = get_configuration_value("test.key", "", &old_config_format);
+    ASSERT_EQ(NULL, value);
+    value = get_configuration_value("test.key2", "", &old_config_format);
+    ASSERT_EQ(NULL, value);
+    value = get_configuration_value("feature.tc.enabled", "abcd", &old_config_format);
+    ASSERT_EQ(NULL, value);
+  }
+
+  TEST_F(TestConfiguration, test_no_sections_format) {
+    const struct section *executor_cfg = get_configuration_section("", &old_config_format);
+    char *value = NULL;
+    value = get_section_value("yarn.nodemanager.linux-container-executor.group", executor_cfg);
+    ASSERT_STREQ("yarn", value);
+    value = get_section_value("feature.docker.enabled", executor_cfg);
+    ASSERT_STREQ("1", value);
+    value = get_section_value("feature.tc.enabled", executor_cfg);
+    ASSERT_STREQ("0", value);
+    value = get_section_value("min.user.id", executor_cfg);
+    ASSERT_STREQ("1000", value);
+    value = get_section_value("docker.binary", executor_cfg);
+    ASSERT_STREQ("/usr/bin/docker", value);
+    char **list = get_section_values("allowed.system.users", executor_cfg);
+    ASSERT_STREQ("nobody", list[0]);
+    ASSERT_STREQ("daemon", list[1]);
+    list = get_section_values("banned.users", executor_cfg);
+    ASSERT_STREQ("root", list[0]);
+    ASSERT_STREQ("testuser1", list[1]);
+    ASSERT_STREQ("testuser2", list[2]);
+  }
+
+  TEST_F(TestConfiguration, test_get_section_values_delimiter) {
+    const struct section *section;
+    char *value;
+    char **split_values;
+    section = get_configuration_section("section-1", &new_config_format);
+    value = get_section_value("key1", section);
+    ASSERT_STREQ("value1", value);
+    free(value);
+    value = get_section_value("key2", section);
+    ASSERT_EQ(NULL, value);
+    split_values = get_section_values_delimiter(NULL, section, "%");
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_section_values_delimiter("split-key", NULL, "%");
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_section_values_delimiter("split-key", section, NULL);
+    ASSERT_EQ(NULL, split_values);
+    split_values = get_section_values_delimiter("split-key", section, "%");
+    ASSERT_FALSE(split_values == NULL);
+    ASSERT_STREQ("val1,val2,val3", split_values[0]);
+    ASSERT_TRUE(split_values[1] == NULL);
+    free_values(split_values);
+    split_values = get_section_values_delimiter("perc-key", section, "%");
+    ASSERT_FALSE(split_values == NULL);
+    ASSERT_STREQ("perc-val1", split_values[0]);
+    ASSERT_STREQ("perc-val2", split_values[1]);
+    ASSERT_TRUE(split_values[2] == NULL);
+  }
+
+  TEST_F(TestConfiguration, test_get_section_values) {
+    const struct section *section;
+    char *value;
+    char **split_values;
+    section = get_configuration_section("section-1", &new_config_format);
+    value = get_section_value(NULL, section);
+    ASSERT_EQ(NULL, value);
+    value = get_section_value("key1", NULL);
+    ASSERT_EQ(NULL, value);
+    value = get_section_value("key1", section);
+    ASSERT_STREQ("value1", value);
+    free(value);
+    value = get_section_value("key2", section);
+    ASSERT_EQ(NULL, value);
+    split_values = get_section_values("split-key", section);
+    ASSERT_FALSE(split_values == NULL);
+    ASSERT_STREQ("val1", split_values[0]);
+    ASSERT_STREQ("val2", split_values[1]);
+    ASSERT_STREQ("val3", split_values[2]);
+    ASSERT_TRUE(split_values[3] == NULL);
+    free_values(split_values);
+    split_values = get_section_values("perc-key", section);
+    ASSERT_FALSE(split_values == NULL);
+    ASSERT_STREQ("perc-val1%perc-val2", split_values[0]);
+    ASSERT_TRUE(split_values[1] == NULL);
+    free_values(split_values);
+    section = get_configuration_section("section-2", &new_config_format);
+    value = get_section_value("key1", section);
+    ASSERT_STREQ("value2", value);
+    free(value);
+    value = get_section_value("key2", section);
+    ASSERT_STREQ("value2", value);
+    free(value);
+  }
+
+  TEST_F(TestConfiguration, test_split_section) {
+    const struct section *section;
+    char *value;
+    section = get_configuration_section("split-section", &new_config_format);
+    value = get_section_value(NULL, section);
+    ASSERT_EQ(NULL, value);
+    value = get_section_value("key3", NULL);
+    ASSERT_EQ(NULL, value);
+    value = get_section_value("key3", section);
+    ASSERT_STREQ("value3", value);
+    free(value);
+    value = get_section_value("key4", section);
+    ASSERT_STREQ("value4", value);
+
+  }
+
+  TEST_F(TestConfiguration, test_get_configuration_section) {
+    const struct section *section;
+    ASSERT_EQ(3, new_config_format.size);
+    section = get_configuration_section(NULL, &new_config_format);
+    ASSERT_EQ(NULL, section);
+    section = get_configuration_section("section-1", NULL);
+    ASSERT_EQ(NULL, section);
+    section = get_configuration_section("section-1", &new_config_format);
+    ASSERT_FALSE(section == NULL);
+    ASSERT_STREQ("section-1", section->name);
+    ASSERT_EQ(3, section->size);
+    ASSERT_FALSE(NULL == section->kv_pairs);
+    section = get_configuration_section("section-2", &new_config_format);
+    ASSERT_FALSE(section == NULL);
+    ASSERT_STREQ("section-2", section->name);
+    ASSERT_EQ(2, section->size);
+    ASSERT_FALSE(NULL == section->kv_pairs);
+    section = get_configuration_section("section-3", &new_config_format);
+    ASSERT_TRUE(section == NULL);
+  }
+
+  TEST_F(TestConfiguration, test_read_config) {
+    struct configuration config;
+    int ret = 0;
+
+    ret = read_config(NULL, &config);
+    ASSERT_EQ(INVALID_CONFIG_FILE, ret);
+    ret = read_config("bad-config-file", &config);
+    ASSERT_EQ(INVALID_CONFIG_FILE, ret);
+    ret = read_config(new_config_format_file.c_str(), &config);
+    ASSERT_EQ(0, ret);
+    ASSERT_EQ(3, config.size);
+    ASSERT_STREQ("section-1", config.sections[0]->name);
+    ASSERT_STREQ("split-section", config.sections[1]->name);
+    ASSERT_STREQ("section-2", config.sections[2]->name);
+    free_configuration(&config);
+    ret = read_config(old_config_format_file.c_str(), &config);
+    ASSERT_EQ(0, ret);
+    ASSERT_EQ(1, config.size);
+    ASSERT_STREQ("", config.sections[0]->name);
+    free_configuration(&config);
+  }
+
+  TEST_F(TestConfiguration, test_get_kv_key) {
+    int ret = 0;
+    char buff[1024];
+    ret = get_kv_key(NULL, buff, 1024);
+    ASSERT_EQ(-EINVAL, ret);
+    ret = get_kv_key("key1234", buff, 1024);
+    ASSERT_EQ(-EINVAL, ret);
+    ret = get_kv_key("key=abcd", NULL, 1024);
+    ASSERT_EQ(-ENAMETOOLONG, ret);
+    ret = get_kv_key("key=abcd", buff, 1);
+    ASSERT_EQ(-ENAMETOOLONG, ret);
+    ret = get_kv_key("key=abcd", buff, 1024);
+    ASSERT_EQ(0, ret);
+    ASSERT_STREQ("key", buff);
+  }
+
+  TEST_F(TestConfiguration, test_get_kv_value) {
+    int ret = 0;
+    char buff[1024];
+    ret = get_kv_value(NULL, buff, 1024);
+    ASSERT_EQ(-EINVAL, ret);
+    ret = get_kv_value("key1234", buff, 1024);
+    ASSERT_EQ(-EINVAL, ret);
+    ret = get_kv_value("key=abcd", NULL, 1024);
+    ASSERT_EQ(-ENAMETOOLONG, ret);
+    ret = get_kv_value("key=abcd", buff, 1);
+    ASSERT_EQ(-ENAMETOOLONG, ret);
+    ret = get_kv_value("key=abcd", buff, 1024);
+    ASSERT_EQ(0, ret);
+    ASSERT_STREQ("abcd", buff);
+  }
+
+  TEST_F(TestConfiguration, test_single_section_high_key_count) {
+    std::string section_name = "section-1";
+    std::string sample_file_name = "large-section.cfg";
+    std::ofstream sample_file;
+    sample_file.open(sample_file_name.c_str());
+    sample_file << "[" << section_name << "]" << std::endl;
+    for(int i = 0; i < MAX_SIZE + 2; ++i) {
+      sample_file << "key" << i << "=" << "value" << i << std::endl;
+    }
+    struct configuration cfg;
+    int ret = read_config(sample_file_name.c_str(), &cfg);
+    ASSERT_EQ(0, ret);
+    ASSERT_EQ(1, cfg.size);
+    const struct section *section1 = get_configuration_section(section_name.c_str(), &cfg);
+    ASSERT_EQ(MAX_SIZE + 2, section1->size);
+    ASSERT_STREQ(section_name.c_str(), section1->name);
+    for(int i = 0; i < MAX_SIZE + 2; ++i) {
+      std::ostringstream oss;
+      oss << "key" << i;
+      const char *value = get_section_value(oss.str().c_str(), section1);
+      oss.str("");
+      oss << "value" << i;
+      ASSERT_STREQ(oss.str().c_str(), value);
+    }
+    remove(sample_file_name.c_str());
+    free_configuration(&cfg);
+  }
+
+  TEST_F(TestConfiguration, test_multiple_sections) {
+    std::string sample_file_name = "multiple-sections.cfg";
+    std::ofstream sample_file;
+    sample_file.open(sample_file_name.c_str());
+    for(int i = 0; i < MAX_SIZE + 2; ++i) {
+      sample_file << "[section-" << i << "]" << std::endl;
+      sample_file << "key" << i << "=" << "value" << i << std::endl;
+    }
+    struct configuration cfg;
+    int ret = read_config(sample_file_name.c_str(), &cfg);
+    ASSERT_EQ(0, ret);
+    ASSERT_EQ(MAX_SIZE + 2, cfg.size);
+    for(int i = 0; i < MAX_SIZE + 2; ++i) {
+      std::ostringstream oss;
+      oss << "section-" << i;
+      const struct section *section = get_configuration_section(oss.str().c_str(), &cfg);
+      ASSERT_EQ(1, section->size);
+      ASSERT_STREQ(oss.str().c_str(), section->name);
+      oss.str("");
+      oss << "key" << i;
+      const char *value = get_section_value(oss.str().c_str(), section);
+      oss.str("");
+      oss << "value" << i;
+      ASSERT_STREQ(oss.str().c_str(), value);
+    }
+    remove(sample_file_name.c_str());
+    free_configuration(&cfg);
+  }
+
+  TEST_F(TestConfiguration, test_section_start_line) {
+    const char *section_start_line = "[abcd]";
+    const char *non_section_lines[] = {
+        "[abcd", "abcd]", "key=value", "#abcd"
+    };
+    int ret = is_section_start_line(section_start_line);
+    ASSERT_EQ(1, ret);
+    int length = sizeof(non_section_lines) / sizeof(*non_section_lines);
+    for( int i = 0; i < length; ++i) {
+      ret = is_section_start_line(non_section_lines[i]);
+      ASSERT_EQ(0, ret);
+    }
+    ret = is_section_start_line(NULL);
+    ASSERT_EQ(0, ret);
+  }
+
+  TEST_F(TestConfiguration, test_comment_line) {
+    const char *comment_line = "#[abcd]";
+    const char *non_comment_lines[] = {
+        "[abcd", "abcd]", "key=value", "[abcd]"
+    };
+    int ret = is_comment_line(comment_line);
+    ASSERT_EQ(1, ret);
+    int length = sizeof(non_comment_lines) / sizeof(*non_comment_lines);
+    for( int i = 0; i < length; ++i) {
+      ret = is_comment_line(non_comment_lines[i]);
+      ASSERT_EQ(0, ret);
+    }
+    ret = is_comment_line(NULL);
+    ASSERT_EQ(0, ret);
+  }
+
+  TEST_F(TestConfiguration, test_mixed_config_format) {
+    const struct section *executor_cfg =
+        get_configuration_section("", &mixed_config_format);
+    char *value = NULL;
+    value = get_section_value("key1", executor_cfg);
+    ASSERT_STREQ("value1", value);
+    value = get_section_value("key2", executor_cfg);
+    ASSERT_STREQ("value2", value);
+    ASSERT_EQ(2, executor_cfg->size);
+    executor_cfg = get_configuration_section("section-1",
+                                             &mixed_config_format);
+    value = get_section_value("key3", executor_cfg);
+    ASSERT_STREQ("value3", value);
+    value = get_section_value("key1", executor_cfg);
+    ASSERT_STREQ("value4", value);
+    ASSERT_EQ(2, executor_cfg->size);
+    ASSERT_EQ(2, mixed_config_format.size);
+    ASSERT_STREQ("", mixed_config_format.sections[0]->name);
+    ASSERT_STREQ("section-1", mixed_config_format.sections[1]->name);
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_main.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_main.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_main.cc
new file mode 100644
index 0000000..d59a3f2
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_main.cc
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <gtest/gtest.h>
+#include <main/native/container-executor/impl/util.h>
+#include <cstdio>
+
+FILE* ERRORFILE = stderr;
+FILE* LOGFILE = stdout;
+
+int main(int argc, char **argv) {
+    testing::InitGoogleTest(&argc, argv);
+    return RUN_ALL_TESTS();
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
new file mode 100644
index 0000000..2ec7b2a
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
@@ -0,0 +1,138 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <gtest/gtest.h>
+#include <sstream>
+
+extern "C" {
+#include "util.h"
+}
+
+namespace ContainerExecutor {
+
+  class TestUtil : public ::testing::Test {
+  protected:
+    virtual void SetUp() {
+    }
+
+    virtual void TearDown() {
+    }
+  };
+
+  TEST_F(TestUtil, test_split_delimiter) {
+    std::string str = "1,2,3,4,5,6,7,8,9,10,11";
+    char *split_string = (char *) calloc(str.length() + 1, sizeof(char));
+    strncpy(split_string, str.c_str(), str.length());
+    char **splits = split_delimiter(split_string, ",");
+    ASSERT_TRUE(splits != NULL);
+    int count = 0;
+    while(splits[count] != NULL) {
+      ++count;
+    }
+    ASSERT_EQ(11, count);
+    for(int i = 1; i < count; ++i) {
+      std::ostringstream oss;
+      oss << i;
+      ASSERT_STREQ(oss.str().c_str(), splits[i-1]);
+    }
+    ASSERT_EQ(NULL, splits[count]);
+    free_values(splits);
+
+    split_string = (char *) calloc(str.length() + 1, sizeof(char));
+    strncpy(split_string, str.c_str(), str.length());
+    splits = split_delimiter(split_string, "%");
+    ASSERT_TRUE(splits != NULL);
+    ASSERT_TRUE(splits[1] == NULL);
+    ASSERT_STREQ(str.c_str(), splits[0]);
+    free_values(splits);
+
+    splits = split_delimiter(NULL, ",");
+    ASSERT_EQ(NULL, splits);
+    return;
+  }
+
+  TEST_F(TestUtil, test_split) {
+    std::string str = "1%2%3%4%5%6%7%8%9%10%11";
+    char *split_string = (char *) calloc(str.length() + 1, sizeof(char));
+    strncpy(split_string, str.c_str(), str.length());
+    char **splits = split(split_string);
+    int count = 0;
+    while(splits[count] != NULL) {
+      ++count;
+    }
+    ASSERT_EQ(11, count);
+    for(int i = 1; i < count; ++i) {
+      std::ostringstream oss;
+      oss << i;
+      ASSERT_STREQ(oss.str().c_str(), splits[i-1]);
+    }
+    ASSERT_EQ(NULL, splits[count]);
+    free_values(splits);
+
+    str = "1,2,3,4,5,6,7,8,9,10,11";
+    split_string = (char *) calloc(str.length() + 1, sizeof(char));
+    strncpy(split_string, str.c_str(), str.length());
+    splits = split(split_string);
+    ASSERT_TRUE(splits != NULL);
+    ASSERT_TRUE(splits[1] == NULL);
+    ASSERT_STREQ(str.c_str(), splits[0]);
+    return;
+  }
+
+  TEST_F(TestUtil, test_trim) {
+    char* trimmed = NULL;
+
+    // Check NULL input
+    ASSERT_EQ(NULL, trim(NULL));
+
+    // Check empty input
+    trimmed = trim("");
+    ASSERT_STREQ("", trimmed);
+    free(trimmed);
+
+    // Check single space input
+    trimmed = trim(" ");
+    ASSERT_STREQ("", trimmed);
+    free(trimmed);
+
+    // Check multi space input
+    trimmed = trim("   ");
+    ASSERT_STREQ("", trimmed);
+    free(trimmed);
+
+    // Check both side trim input
+    trimmed = trim(" foo ");
+    ASSERT_STREQ("foo", trimmed);
+    free(trimmed);
+
+    // Check left side trim input
+    trimmed = trim("foo   ");
+    ASSERT_STREQ("foo", trimmed);
+    free(trimmed);
+
+    // Check right side trim input
+    trimmed = trim("   foo");
+    ASSERT_STREQ("foo", trimmed);
+    free(trimmed);
+
+    // Check no trim input
+    trimmed = trim("foo");
+    ASSERT_STREQ("foo", trimmed);
+    free(trimmed);
+  }
+}




[45/50] [abbrv] hadoop git commit: HDFS-12287. Remove a no-longer applicable TODO comment in DatanodeManager. Contributed by Chen Liang.

Posted by wa...@apache.org.
HDFS-12287. Remove a no-longer applicable TODO comment in DatanodeManager. Contributed by Chen Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f13ca949
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f13ca949
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f13ca949

Branch: refs/heads/YARN-5881
Commit: f13ca94954072c9b898b142a5ff86f2c1f3ee55a
Parents: a32e013
Author: Yiqun Lin <yq...@apache.org>
Authored: Fri Aug 11 14:13:45 2017 +0800
Committer: Yiqun Lin <yq...@apache.org>
Committed: Fri Aug 11 14:13:45 2017 +0800

----------------------------------------------------------------------
 .../apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java | 2 --
 1 file changed, 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f13ca949/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index d705fec..78783ca 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -212,8 +212,6 @@ public class DatanodeManager {
     this.namesystem = namesystem;
     this.blockManager = blockManager;
 
-    // TODO: Enables DFSNetworkTopology by default after more stress
-    // testings/validations.
     this.useDfsNetworkTopology = conf.getBoolean(
         DFSConfigKeys.DFS_USE_DFS_NETWORK_TOPOLOGY_KEY,
         DFSConfigKeys.DFS_USE_DFS_NETWORK_TOPOLOGY_DEFAULT);




[05/50] [abbrv] hadoop git commit: HADOOP-14685. Exclude some test jars from hadoop-client-minicluster jar. Contributed by Bharat Viswanadham.

Posted by wa...@apache.org.
HADOOP-14685. Exclude some test jars from hadoop-client-minicluster jar. Contributed by Bharat Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/024c3ec4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/024c3ec4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/024c3ec4

Branch: refs/heads/YARN-5881
Commit: 024c3ec4a3ad47cf30501497c7ae810a30634f82
Parents: f44b349
Author: Arpit Agarwal <ar...@apache.org>
Authored: Fri Aug 4 16:46:59 2017 -0700
Committer: Arpit Agarwal <ar...@apache.org>
Committed: Fri Aug 4 16:46:59 2017 -0700

----------------------------------------------------------------------
 hadoop-client-modules/hadoop-client-minicluster/pom.xml | 7 +++++++
 1 file changed, 7 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/024c3ec4/hadoop-client-modules/hadoop-client-minicluster/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index f4b2329..5255640 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -634,6 +634,13 @@
                         <exclude>**/*</exclude>
                       </excludes>
                     </filter>
+                    <filter>
+                      <artifact>org.apache.hadoop:hadoop-mapreduce-client-jobclient:*</artifact>
+                      <excludes>
+                        <exclude>testjar/*</exclude>
+                        <exclude>testshell/*</exclude>
+                      </excludes>
+                    </filter>
                   </filters>
                   <relocations>
                     <relocation>
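
The added filter keeps the testjar/ and testshell/ trees from the
hadoop-mapreduce-client-jobclient artifacts out of the shaded minicluster
jar. A quick sanity check after a rebuild is to list the shaded jar (for
example with "jar tf hadoop-client-minicluster-<version>.jar") and confirm
that no entries under testjar/ or testshell/ remain; the command and version
placeholder here are illustrative.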




[12/50] [abbrv] hadoop git commit: HADOOP-14727. Socket not closed properly when reading Configurations with BlockReaderRemote. Contributed by Jonathan Eagles.

Posted by wa...@apache.org.
HADOOP-14727. Socket not closed properly when reading Configurations with BlockReaderRemote. Contributed by Jonathan Eagles.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a3a9c976
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a3a9c976
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a3a9c976

Branch: refs/heads/YARN-5881
Commit: a3a9c976c3cfa3ab6b0936eb8cf0889891bd0678
Parents: 0b67436
Author: Xiao Chen <xi...@apache.org>
Authored: Fri Aug 4 20:53:45 2017 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Mon Aug 7 10:25:52 2017 -0700

----------------------------------------------------------------------
 .../java/org/apache/hadoop/conf/Configuration.java   | 15 ++++++++++-----
 .../org/apache/hadoop/conf/TestConfiguration.java    |  6 ++++--
 2 files changed, 14 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3a9c976/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index e26d3a8..65e8569 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.conf;
 
+import com.ctc.wstx.io.StreamBootstrapper;
+import com.ctc.wstx.io.SystemId;
 import com.ctc.wstx.stax.WstxInputFactory;
 import com.fasterxml.jackson.core.JsonFactory;
 import com.fasterxml.jackson.core.JsonGenerator;
@@ -94,7 +96,6 @@ import org.apache.hadoop.security.alias.CredentialProviderFactory;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.StringInterner;
 import org.apache.hadoop.util.StringUtils;
-import org.codehaus.stax2.XMLInputFactory2;
 import org.codehaus.stax2.XMLStreamReader2;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -285,7 +286,8 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
    * Specify exact input factory to avoid time finding correct one.
    * Factory is reusable across un-synchronized threads once initialized
    */
-  private static final XMLInputFactory2 XML_INPUT_FACTORY = new WstxInputFactory();
+  private static final WstxInputFactory XML_INPUT_FACTORY =
+      new WstxInputFactory();
 
   /**
    * Class to keep the information about the keys which replace the deprecated
@@ -2647,15 +2649,18 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
     return parse(connection.getInputStream(), url.toString());
   }
 
-  private XMLStreamReader parse(InputStream is,
-      String systemId) throws IOException, XMLStreamException {
+  private XMLStreamReader parse(InputStream is, String systemIdStr)
+      throws IOException, XMLStreamException {
     if (!quietmode) {
       LOG.debug("parsing input stream " + is);
     }
     if (is == null) {
       return null;
     }
-    return XML_INPUT_FACTORY.createXMLStreamReader(systemId, is);
+    SystemId systemId = SystemId.construct(systemIdStr);
+    return XML_INPUT_FACTORY.createSR(XML_INPUT_FACTORY.createPrivateConfig(),
+        systemId, StreamBootstrapper.getInstance(null, systemId, is), false,
+        true);
   }
 
   private void loadResources(Properties properties,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3a9c976/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
index 2af61c0..92d3290 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
@@ -155,11 +155,13 @@ public class TestConfiguration extends TestCase {
     startConfig();
     declareProperty("prop", "A", "A");
     endConfig();
-    
-    InputStream in1 = new ByteArrayInputStream(writer.toString().getBytes());
+
+    InputStream in1 = Mockito.spy(new ByteArrayInputStream(
+          writer.toString().getBytes()));
     Configuration conf = new Configuration(false);
     conf.addResource(in1);
     assertEquals("A", conf.get("prop"));
+    Mockito.verify(in1, Mockito.times(1)).close();
     InputStream in2 = new ByteArrayInputStream(writer.toString().getBytes());
     conf.addResource(in2);
     assertEquals("A", conf.get("prop"));




[39/50] [abbrv] hadoop git commit: MAPREDUCE-6923. Optimize MapReduce Shuffle I/O for small partitions. Contributed by Robert Schmidtke.

Posted by wa...@apache.org.
MAPREDUCE-6923. Optimize MapReduce Shuffle I/O for small partitions. Contributed by Robert Schmidtke.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac7d0604
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac7d0604
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac7d0604

Branch: refs/heads/YARN-5881
Commit: ac7d0604bc73c0925eff240ad9837e14719d57b7
Parents: b5c02f9
Author: Ravi Prakash <ra...@altiscale.com>
Authored: Wed Aug 9 15:39:52 2017 -0700
Committer: Ravi Prakash <ra...@altiscale.com>
Committed: Wed Aug 9 15:39:52 2017 -0700

----------------------------------------------------------------------
 .../main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java  | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac7d0604/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
index cb9b5e0..79045f9 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
@@ -111,7 +111,10 @@ public class FadvisedFileRegion extends DefaultFileRegion {
     
     long trans = actualCount;
     int readSize;
-    ByteBuffer byteBuffer = ByteBuffer.allocate(this.shuffleBufferSize);
+    ByteBuffer byteBuffer = ByteBuffer.allocate(
+        Math.min(
+            this.shuffleBufferSize,
+            trans > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) trans));
     
     while(trans > 0L &&
         (readSize = fileChannel.read(byteBuffer, this.position+position)) > 0) {
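
The allocation is now clamped to the bytes actually remaining in the
transfer, so shuffling a small partition no longer allocates the full
configured shuffle buffer. The sizing rule in isolation (class and method
names hypothetical):

    public class ShuffleBufferSizing {
      // Never larger than the configured buffer, never larger than the bytes
      // left to send, with the long count narrowed safely to int.
      static int transferBufferSize(int shuffleBufferSize, long bytesRemaining) {
        int remaining = bytesRemaining > Integer.MAX_VALUE
            ? Integer.MAX_VALUE
            : (int) bytesRemaining;
        return Math.min(shuffleBufferSize, remaining);
      }

      public static void main(String[] args) {
        System.out.println(transferBufferSize(128 * 1024, 4096));     // 4096
        System.out.println(transferBufferSize(128 * 1024, 5L << 30)); // 131072
      }
    }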




[16/50] [abbrv] hadoop git commit: YARN-6920. Fix resource leak that happens during container re-initialization. (asuresh)

Posted by wa...@apache.org.
YARN-6920. Fix resource leak that happens during container re-initialization. (asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8d3fd819
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8d3fd819
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8d3fd819

Branch: refs/heads/YARN-5881
Commit: 8d3fd81980275fa81e7a5539b1751f38a63b6911
Parents: c61f2c4
Author: Arun Suresh <as...@apache.org>
Authored: Mon Aug 7 18:59:25 2017 -0700
Committer: Arun Suresh <as...@apache.org>
Committed: Mon Aug 7 18:59:25 2017 -0700

----------------------------------------------------------------------
 .../yarn/client/api/impl/TestNMClient.java      | 37 +++++++++-----------
 .../container/ContainerImpl.java                |  4 +++
 .../scheduler/ContainerScheduler.java           |  4 +++
 .../containermanager/TestContainerManager.java  |  9 +++++
 4 files changed, 34 insertions(+), 20 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d3fd819/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
index 1034f7e..6bd0816 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
@@ -398,6 +398,8 @@ public class TestNMClient {
               "will be Rolled-back", Arrays.asList(new Integer[] {-1000}));
           testCommitContainer(container.getId(), true);
           testReInitializeContainer(container.getId(), clc, false);
+          testGetContainerStatus(container, i, ContainerState.RUNNING,
+              "will be Re-initialized", Arrays.asList(new Integer[] {-1000}));
           testCommitContainer(container.getId(), false);
         } else {
           testReInitializeContainer(container.getId(), clc, true);
@@ -449,24 +451,21 @@ public class TestNMClient {
       ContainerState state, String diagnostics, List<Integer> exitStatuses)
           throws YarnException, IOException {
     while (true) {
-      try {
-        ContainerStatus status = nmClient.getContainerStatus(
-            container.getId(), container.getNodeId());
-        // NodeManager may still need some time to get the stable
-        // container status
-        if (status.getState() == state) {
-          assertEquals(container.getId(), status.getContainerId());
-          assertTrue("" + index + ": " + status.getDiagnostics(),
-              status.getDiagnostics().contains(diagnostics));
-          
-          assertTrue("Exit Statuses are supposed to be in: " + exitStatuses +
-              ", but the actual exit status code is: " + status.getExitStatus(),
-              exitStatuses.contains(status.getExitStatus()));
-          break;
-        }
-        Thread.sleep(100);
-      } catch (InterruptedException e) {
-        e.printStackTrace();
+      sleep(250);
+      ContainerStatus status = nmClient.getContainerStatus(
+          container.getId(), container.getNodeId());
+      // NodeManager may still need some time to get the stable
+      // container status
+      if (status.getState() == state) {
+        assertEquals(container.getId(), status.getContainerId());
+        assertTrue("" + index + ": " + status.getDiagnostics(),
+            status.getDiagnostics().contains(diagnostics));
+
+        assertTrue("Exit Statuses are supposed to be in: " + exitStatuses +
+                ", but the actual exit status code is: " +
+                status.getExitStatus(),
+            exitStatuses.contains(status.getExitStatus()));
+        break;
       }
     }
   }
@@ -559,9 +558,7 @@ public class TestNMClient {
       ContainerLaunchContext clc, boolean autoCommit)
       throws YarnException, IOException {
     try {
-      sleep(250);
       nmClient.reInitializeContainer(containerId, clc, autoCommit);
-      sleep(250);
     } catch (YarnException e) {
       // NM container will only be in SCHEDULED state, so expect the increase
       // action to fail.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d3fd819/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index 46f8fa0..c0aa6b0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -1397,6 +1397,10 @@ public class ContainerImpl implements Container {
       container.resourceSet =
           container.reInitContext.mergedResourceSet(container.resourceSet);
       container.isMarkeForKilling = false;
+      // Ensure Resources are decremented.
+      container.dispatcher.getEventHandler().handle(
+          new ContainerSchedulerEvent(container,
+          ContainerSchedulerEventType.CONTAINER_COMPLETED));
       container.sendScheduleEvent();
     }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d3fd819/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java
index c119bf2..60d6213 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java
@@ -466,4 +466,8 @@ public class ContainerScheduler extends AbstractService implements
     return this.context.getContainerManager().getContainersMonitor();
   }
 
+  @VisibleForTesting
+  public ResourceUtilization getCurrentUtilization() {
+    return this.utilizationTracker.getCurrentUtilization();
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d3fd819/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
index f2d2037..24d46b6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
@@ -74,6 +74,7 @@ import org.apache.hadoop.yarn.api.records.LocalResource;
 import org.apache.hadoop.yarn.api.records.LocalResourceType;
 import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
 import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceUtilization;
 import org.apache.hadoop.yarn.api.records.SerializedException;
 import org.apache.hadoop.yarn.api.records.SignalContainerCommand;
 import org.apache.hadoop.yarn.api.records.Token;
@@ -437,7 +438,15 @@ public class TestContainerManager extends BaseContainerManagerTest {
 
     File newStartFile = new File(tmpDir, "start_file_n.txt").getAbsoluteFile();
 
+    ResourceUtilization beforeUpgrade =
+        ResourceUtilization.newInstance(
+            containerManager.getContainerScheduler().getCurrentUtilization());
     prepareContainerUpgrade(autoCommit, false, false, cId, newStartFile);
+    ResourceUtilization afterUpgrade =
+        ResourceUtilization.newInstance(
+            containerManager.getContainerScheduler().getCurrentUtilization());
+    Assert.assertEquals("Possible resource leak detected !!",
+        beforeUpgrade, afterUpgrade);
 
     // Assert that the First process is not alive anymore
     Assert.assertFalse("Process is still alive!",




[40/50] [abbrv] hadoop git commit: YARN-6631. Refactor loader.js in new Yarn UI. Contributed by Akhil P B.

Posted by wa...@apache.org.
YARN-6631. Refactor loader.js in new Yarn UI. Contributed by Akhil P B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8d953c23
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8d953c23
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8d953c23

Branch: refs/heads/YARN-5881
Commit: 8d953c2359c5b12cf5b1f3c14be3ff5bb74242d0
Parents: ac7d060
Author: Sunil G <su...@apache.org>
Authored: Thu Aug 10 11:53:26 2017 +0530
Committer: Sunil G <su...@apache.org>
Committed: Thu Aug 10 11:53:26 2017 +0530

----------------------------------------------------------------------
 .../src/main/webapp/app/initializers/loader.js  | 42 +++++++++-----------
 1 file changed, 19 insertions(+), 23 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d953c23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
index aa8fb07..55f6e1b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
@@ -20,25 +20,27 @@
 
 import Ember from 'ember';
 
-function getTimeLineURL() {
-  return '/conf?name=yarn.timeline-service.webapp.address';
+function getTimeLineURL(rmhost) {
+  var url = window.location.protocol + '//' +
+    (ENV.hosts.localBaseAddress? ENV.hosts.localBaseAddress + '/' : '') + rmhost;
+
+  url += '/conf?name=yarn.timeline-service.webapp.address';
+  Ember.Logger.log("Get Timeline Address URL: " + url);
+  return url;
 }
 
 function updateConfigs(application) {
   var hostname = window.location.hostname;
-  var rmhost = hostname +
-    (window.location.port ? ':' + window.location.port: '');
-
-  Ember.Logger.log("RM Address:" + rmhost);
+  var rmhost = hostname + (window.location.port ? ':' + window.location.port: '');
 
   if(!ENV.hosts.rmWebAddress) {
-    ENV = {
-       hosts: {
-          rmWebAddress: rmhost,
-        },
-    };
+    ENV.hosts.rmWebAddress = rmhost;
+  } else {
+    rmhost = ENV.hosts.rmWebAddress;
   }
 
+  Ember.Logger.log("RM Address: " + rmhost);
+
   if(!ENV.hosts.timelineWebAddress) {
     var timelinehost = "";
     $.ajax({
@@ -46,7 +48,7 @@ function updateConfigs(application) {
       dataType: 'json',
       async: true,
       context: this,
-      url: getTimeLineURL(),
+      url: getTimeLineURL(rmhost),
       success: function(data) {
         timelinehost = data.property.value;
         ENV.hosts.timelineWebAddress = timelinehost;
@@ -54,24 +56,18 @@ function updateConfigs(application) {
         var address = timelinehost.split(":")[0];
         var port = timelinehost.split(":")[1];
 
-        Ember.Logger.log("Timeline Address from RM:" + address + ":" + port);
+        Ember.Logger.log("Timeline Address from RM: " + timelinehost);
 
         if(address === "0.0.0.0" || address === "localhost") {
           var updatedAddress =  hostname + ":" + port;
-
-          /* Timeline v2 is not supporting CORS, so make as default*/
-          ENV = {
-             hosts: {
-                rmWebAddress: rmhost,
-                timelineWebAddress: updatedAddress,
-              },
-          };
-          Ember.Logger.log("Timeline Updated Address:" + updatedAddress);
+          ENV.hosts.timelineWebAddress = updatedAddress;
+          Ember.Logger.log("Timeline Updated Address: " + updatedAddress);
         }
         application.advanceReadiness();
-      },
+      }
     });
   } else {
+    Ember.Logger.log("Timeline Address: " + ENV.hosts.timelineWebAddress);
     application.advanceReadiness();
   }
 }




[44/50] [abbrv] hadoop git commit: MAPREDUCE-6870. Add configuration for MR job to finish when all reducers are complete. (Peter Bacsko via Haibo Chen)

Posted by wa...@apache.org.
MAPREDUCE-6870. Add configuration for MR job to finish when all reducers are complete. (Peter Bacsko via Haibo Chen)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a32e0138
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a32e0138
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a32e0138

Branch: refs/heads/YARN-5881
Commit: a32e0138fb63c92902e6613001f38a87c8a41321
Parents: 312e57b
Author: Haibo Chen <ha...@apache.org>
Authored: Thu Aug 10 15:17:36 2017 -0700
Committer: Haibo Chen <ha...@apache.org>
Committed: Thu Aug 10 15:17:36 2017 -0700

----------------------------------------------------------------------
 .../mapreduce/v2/app/job/impl/JobImpl.java      |  35 ++++-
 .../mapreduce/v2/app/job/impl/TestJobImpl.java  | 139 +++++++++++++++----
 .../apache/hadoop/mapreduce/MRJobConfig.java    |   6 +-
 .../src/main/resources/mapred-default.xml       |   8 ++
 4 files changed, 160 insertions(+), 28 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a32e0138/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index 4d155d0..6880b6c 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -644,6 +644,8 @@ public class JobImpl implements org.apache.hadoop.mapreduce.v2.app.job.Job,
   private float reduceProgress;
   private float cleanupProgress;
   private boolean isUber = false;
+  private boolean finishJobWhenReducersDone;
+  private boolean completingJob = false;
 
   private Credentials jobCredentials;
   private Token<JobTokenIdentifier> jobToken;
@@ -717,6 +719,9 @@ public class JobImpl implements org.apache.hadoop.mapreduce.v2.app.job.Job,
     this.maxFetchFailuresNotifications = conf.getInt(
         MRJobConfig.MAX_FETCH_FAILURES_NOTIFICATIONS,
         MRJobConfig.DEFAULT_MAX_FETCH_FAILURES_NOTIFICATIONS);
+    this.finishJobWhenReducersDone = conf.getBoolean(
+        MRJobConfig.FINISH_JOB_WHEN_REDUCERS_DONE,
+        MRJobConfig.DEFAULT_FINISH_JOB_WHEN_REDUCERS_DONE);
   }
 
   protected StateMachine<JobStateInternal, JobEventType, JobEvent> getStateMachine() {
@@ -2021,7 +2026,9 @@ public class JobImpl implements org.apache.hadoop.mapreduce.v2.app.job.Job,
                 TimeUnit.MILLISECONDS);
         return JobStateInternal.FAIL_WAIT;
       }
-      
+
+      checkReadyForCompletionWhenAllReducersDone(job);
+
       return job.checkReadyForCommit();
     }
 
@@ -2052,6 +2059,32 @@ public class JobImpl implements org.apache.hadoop.mapreduce.v2.app.job.Job,
       }
       job.metrics.killedTask(task);
     }
+
+    /**
+     * If all reducers have finished, check whether any restarted mappers are
+     * still running and kill them. This can happen when a node becomes
+     * UNHEALTHY and mappers are rescheduled. See MAPREDUCE-6870 for details.
+     */
+    private void checkReadyForCompletionWhenAllReducersDone(JobImpl job) {
+      if (job.finishJobWhenReducersDone) {
+        int totalReduces = job.getTotalReduces();
+        int completedReduces = job.getCompletedReduces();
+
+        if (totalReduces > 0 && totalReduces == completedReduces
+            && !job.completingJob) {
+
+          for (TaskId mapTaskId : job.mapTasks) {
+            MapTaskImpl task = (MapTaskImpl) job.tasks.get(mapTaskId);
+            if (!task.isFinished()) {
+              LOG.info("Killing map task " + task.getID());
+              job.eventHandler.handle(
+                  new TaskEvent(task.getID(), TaskEventType.T_KILL));
+            }
+          }
+
+          job.completingJob = true;
+        }
+      }
+    }
   }
 
   // Transition class for handling jobs with no tasks
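
The switch is read from MRJobConfig.FINISH_JOB_WHEN_REDUCERS_DONE and, per
the diffstat above, documented in mapred-default.xml in a hunk not included
in this message. Assuming the key string behind the constant is
mapreduce.job.finish-when-all-reducers-done with a default of true (both are
assumptions here, not quoted from the patch), the entry would look roughly
like:

    <property>
      <name>mapreduce.job.finish-when-all-reducers-done</name>
      <value>true</value>
      <description>Whether the job can complete once all reducers have
        finished, even if some mappers are still running (for example,
        mappers rescheduled after their node became UNHEALTHY).
      </description>
    </property>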

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a32e0138/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 2147ec1..1827ce4 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -564,33 +564,13 @@ public class TestJobImpl {
     dispatcher.register(TaskAttemptEventType.class, taskAttemptEventHandler);
 
     // replace the tasks with spied versions to return the right attempts
-    Map<TaskId,Task> spiedTasks = new HashMap<TaskId,Task>();
-    List<NodeReport> nodeReports = new ArrayList<NodeReport>();
-    Map<NodeReport,TaskId> nodeReportsToTaskIds =
-        new HashMap<NodeReport,TaskId>();
-    for (Map.Entry<TaskId,Task> e: job.tasks.entrySet()) {
-      TaskId taskId = e.getKey();
-      Task task = e.getValue();
-      if (taskId.getTaskType() == TaskType.MAP) {
-        // add an attempt to the task to simulate nodes
-        NodeId nodeId = mock(NodeId.class);
-        TaskAttempt attempt = mock(TaskAttempt.class);
-        when(attempt.getNodeId()).thenReturn(nodeId);
-        TaskAttemptId attemptId = MRBuilderUtils.newTaskAttemptId(taskId, 0);
-        when(attempt.getID()).thenReturn(attemptId);
-        // create a spied task
-        Task spied = spy(task);
-        doReturn(attempt).when(spied).getAttempt(any(TaskAttemptId.class));
-        spiedTasks.put(taskId, spied);
+    Map<TaskId, Task> spiedTasks = new HashMap<>();
+    List<NodeReport> nodeReports = new ArrayList<>();
+    Map<NodeReport, TaskId> nodeReportsToTaskIds = new HashMap<>();
+
+    createSpiedMapTasks(nodeReportsToTaskIds, spiedTasks, job,
+        NodeState.UNHEALTHY, nodeReports);
 
-        // create a NodeReport based on the node id
-        NodeReport report = mock(NodeReport.class);
-        when(report.getNodeState()).thenReturn(NodeState.UNHEALTHY);
-        when(report.getNodeId()).thenReturn(nodeId);
-        nodeReports.add(report);
-        nodeReportsToTaskIds.put(report, taskId);
-      }
-    }
     // replace the tasks with the spied tasks
     job.tasks.putAll(spiedTasks);
 
@@ -641,6 +621,82 @@ public class TestJobImpl {
     commitHandler.stop();
   }
 
+  @Test
+  public void testJobCompletedWhenAllReducersAreFinished()
+      throws Exception {
+    testJobCompletionWhenReducersAreFinished(true);
+  }
+
+  @Test
+  public void testJobNotCompletedWhenAllReducersAreFinished()
+      throws Exception {
+    testJobCompletionWhenReducersAreFinished(false);
+  }
+
+  private void testJobCompletionWhenReducersAreFinished(boolean killMappers)
+      throws InterruptedException, BrokenBarrierException {
+    Configuration conf = new Configuration();
+    conf.setBoolean(MRJobConfig.FINISH_JOB_WHEN_REDUCERS_DONE, killMappers);
+    conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir);
+    conf.setInt(MRJobConfig.NUM_REDUCES, 1);
+    DrainDispatcher dispatcher = new DrainDispatcher();
+    dispatcher.init(conf);
+    final List<TaskEvent> killedEvents =
+        Collections.synchronizedList(new ArrayList<TaskEvent>());
+    dispatcher.register(TaskEventType.class, new EventHandler<TaskEvent>() {
+      @Override
+      public void handle(TaskEvent event) {
+        if (event.getType() == TaskEventType.T_KILL) {
+          killedEvents.add(event);
+        }
+      }
+    });
+    dispatcher.start();
+    CyclicBarrier syncBarrier = new CyclicBarrier(2);
+    OutputCommitter committer = new TestingOutputCommitter(syncBarrier, true);
+    CommitterEventHandler commitHandler =
+        createCommitterEventHandler(dispatcher, committer);
+    commitHandler.init(conf);
+    commitHandler.start();
+
+    final JobImpl job = createRunningStubbedJob(conf, dispatcher, 2, null);
+
+    // replace the tasks with spied versions to return the right attempts
+    Map<TaskId, Task> spiedTasks = new HashMap<>();
+    List<NodeReport> nodeReports = new ArrayList<>();
+    Map<NodeReport, TaskId> nodeReportsToTaskIds = new HashMap<>();
+
+    createSpiedMapTasks(nodeReportsToTaskIds, spiedTasks, job,
+        NodeState.RUNNING, nodeReports);
+
+    // replace the tasks with the spied tasks
+    job.tasks.putAll(spiedTasks);
+
+    // finish reducer
+    for (TaskId taskId: job.tasks.keySet()) {
+      if (taskId.getTaskType() == TaskType.REDUCE) {
+        job.handle(new JobTaskEvent(taskId, TaskState.SUCCEEDED));
+      }
+    }
+
+    dispatcher.await();
+
+    /*
+     * StubbedJob cannot finish in this test - we'd have to generate the
+     * necessary events in this test manually, but that wouldn't add too
+     * much value. Instead, we validate the T_KILL events.
+     */
+    if (killMappers) {
+      Assert.assertEquals("Number of killed events", 2, killedEvents.size());
+      Assert.assertEquals("AttemptID", "task_1234567890000_0001_m_000000",
+          killedEvents.get(0).getTaskID().toString());
+      Assert.assertEquals("AttemptID", "task_1234567890000_0001_m_000001",
+          killedEvents.get(1).getTaskID().toString());
+    } else {
+      Assert.assertEquals("Number of killed events", 0, killedEvents.size());
+    }
+  }
+
   public static void main(String[] args) throws Exception {
     TestJobImpl t = new TestJobImpl();
     t.testJobNoTasks();
@@ -1021,6 +1077,37 @@ public class TestJobImpl {
     Assert.assertEquals(state, job.getInternalState());
   }
 
+  private void createSpiedMapTasks(Map<NodeReport, TaskId>
+      nodeReportsToTaskIds, Map<TaskId, Task> spiedTasks, JobImpl job,
+      NodeState nodeState, List<NodeReport> nodeReports) {
+    for (Map.Entry<TaskId, Task> e: job.tasks.entrySet()) {
+      TaskId taskId = e.getKey();
+      Task task = e.getValue();
+      if (taskId.getTaskType() == TaskType.MAP) {
+        // add an attempt to the task to simulate nodes
+        NodeId nodeId = mock(NodeId.class);
+        TaskAttempt attempt = mock(TaskAttempt.class);
+        when(attempt.getNodeId()).thenReturn(nodeId);
+        TaskAttemptId attemptId = MRBuilderUtils.newTaskAttemptId(taskId, 0);
+        when(attempt.getID()).thenReturn(attemptId);
+        // create a spied task
+        Task spied = spy(task);
+        Map<TaskAttemptId, TaskAttempt> attemptMap = new HashMap<>();
+        attemptMap.put(attemptId, attempt);
+        when(spied.getAttempts()).thenReturn(attemptMap);
+        doReturn(attempt).when(spied).getAttempt(any(TaskAttemptId.class));
+        spiedTasks.put(taskId, spied);
+
+        // create a NodeReport based on the node id
+        NodeReport report = mock(NodeReport.class);
+        when(report.getNodeState()).thenReturn(nodeState);
+        when(report.getNodeId()).thenReturn(nodeId);
+        nodeReports.add(report);
+        nodeReportsToTaskIds.put(report, taskId);
+      }
+    }
+  }
+
   private static class JobSubmittedEventHandler implements
       EventHandler<JobHistoryEvent> {
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a32e0138/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
index cfc1bcc..2023ba3 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
@@ -431,7 +431,7 @@ public interface MRJobConfig {
   public static final String JOB_ACL_MODIFY_JOB = "mapreduce.job.acl-modify-job";
 
   public static final String DEFAULT_JOB_ACL_MODIFY_JOB = " ";
-  
+
   public static final String JOB_RUNNING_MAP_LIMIT =
       "mapreduce.job.running.map.limit";
   public static final int DEFAULT_JOB_RUNNING_MAP_LIMIT = 0;
@@ -1033,4 +1033,8 @@ public interface MRJobConfig {
   String MR_JOB_REDACTED_PROPERTIES = "mapreduce.job.redacted-properties";
 
   String MR_JOB_SEND_TOKEN_CONF = "mapreduce.job.send-token-conf";
+
+  String FINISH_JOB_WHEN_REDUCERS_DONE =
+      "mapreduce.job.finish-when-all-reducers-done";
+  boolean DEFAULT_FINISH_JOB_WHEN_REDUCERS_DONE = true;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a32e0138/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
index 101aa07..ee9b906 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
@@ -1126,6 +1126,14 @@
 </property>
 
 <property>
+  <name>mapreduce.job.finish-when-all-reducers-done</name>
+  <value>true</value>
+  <description>Specifies whether the job should complete once all reducers
+     have finished, regardless of whether there are still running mappers.
+  </description>
+</property>
+
+<property>
   <name>mapreduce.job.token.tracking.ids.enabled</name>
   <value>false</value>
   <description>Whether to write tracking ids of tokens to
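
A minimal sketch of how a client could opt out of the new default behaviour,
wiring the switch added above through job configuration (the job name is
illustrative only):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.MRJobConfig;

    Configuration conf = new Configuration();
    // false: keep the job running until rescheduled mappers also finish,
    // even after every reducer has succeeded (the pre-MAPREDUCE-6870
    // behaviour). The default shipped above is true.
    conf.setBoolean(MRJobConfig.FINISH_JOB_WHEN_REDUCERS_DONE, false);
    Job job = Job.getInstance(conf, "example-job");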




[48/50] [abbrv] hadoop git commit: YARN-6471. Support to add min/max resource configuration for a queue. (Sunil G via wangda)

Posted by wa...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
index d45f756..a74274c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
@@ -191,6 +191,8 @@ public class TestLeafQueue {
             CapacitySchedulerConfiguration.ROOT, 
             queues, queues, 
             TestUtils.spyHook);
+    root.updateClusterResource(Resources.createResource(100 * 16 * GB, 100 * 32),
+        new ResourceLimits(Resources.createResource(100 * 16 * GB, 100 * 32)));
 
     ResourceUsage queueResUsage = root.getQueueResourceUsage();
     when(csContext.getClusterResourceUsage())
@@ -307,13 +309,11 @@ public class TestLeafQueue {
     // Verify the value for getAMResourceLimit for queues with < .1 maxcap
     Resource clusterResource = Resource.newInstance(50 * GB, 50);
 
-    a.updateClusterResource(clusterResource,
+    root.updateClusterResource(clusterResource,
         new ResourceLimits(clusterResource));
     assertEquals(Resource.newInstance(1 * GB, 1),
         a.calculateAndGetAMResourceLimit());
 
-    b.updateClusterResource(clusterResource,
-        new ResourceLimits(clusterResource));
     assertEquals(Resource.newInstance(5 * GB, 1),
         b.calculateAndGetAMResourceLimit());
   }
@@ -358,6 +358,8 @@ public class TestLeafQueue {
     Resource clusterResource = 
         Resources.createResource(numNodes * (8*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priority = TestUtils.createMockPriority(1);
@@ -556,6 +558,8 @@ public class TestLeafQueue {
     Resource clusterResource = 
         Resources.createResource(numNodes * (8*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priority = TestUtils.createMockPriority(1);
@@ -630,6 +634,8 @@ public class TestLeafQueue {
     // Test max-capacity
     // Now - no more allocs since we are at max-cap
     a.setMaxCapacity(0.5f);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     applyCSAssignment(clusterResource,
         a.assignContainers(clusterResource, node_0,
         new ResourceLimits(clusterResource),
@@ -699,6 +705,8 @@ public class TestLeafQueue {
     Resource clusterResource =
         Resources.createResource(numNodes * (80 * GB), numNodes * 100);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Set user-limit. Need a small queue within a large cluster.
     b.setUserLimit(50);
@@ -779,6 +787,8 @@ public class TestLeafQueue {
         Resources.createResource(numNodes * (8 * GB), numNodes * 100);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
     when(csContext.getClusterResource()).thenReturn(clusterResource);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests so that one application is memory dominant
     // and other application is vcores dominant
@@ -891,6 +901,8 @@ public class TestLeafQueue {
     Resource clusterResource = 
         Resources.createResource(numNodes * (8*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
  
     // Setup resource-requests
     Priority priority = TestUtils.createMockPriority(1);
@@ -915,6 +927,8 @@ public class TestLeafQueue {
     // Set user-limit
     a.setUserLimit(50);
     a.setUserLimitFactor(2);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     
     // There're two active users
     assertEquals(2, a.getAbstractUsersManager().getNumActiveUsers());
@@ -940,7 +954,7 @@ public class TestLeafQueue {
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemorySize());
 
     // Allocate one container to app_0, before allocating this container,
-    // user-limit = ceil((4 + 1) / 2) = 3G. app_0's used resource (3G) <=
+    // user-limit = floor((5 + 1) / 2) = 3G. app_0's used resource (3G) <=
     // user-limit.
     applyCSAssignment(clusterResource,
         a.assignContainers(clusterResource, node_1,
@@ -1068,15 +1082,9 @@ public class TestLeafQueue {
         a.assignContainers(clusterResource, node_1,
         new ResourceLimits(clusterResource),
         SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY), a, nodes, apps);
-    assertEquals(9*GB, a.getUsedResources().getMemorySize());
-    assertEquals(8*GB, app_0.getCurrentConsumption().getMemorySize());
-    assertEquals(1*GB, app_1.getCurrentConsumption().getMemorySize());
-
-    assertEquals(4*GB,
-        app_0.getTotalPendingRequestsPerPartition().get("").getMemorySize());
-
-    assertEquals(1*GB,
-        app_1.getTotalPendingRequestsPerPartition().get("").getMemorySize());
+    assertEquals(12*GB, a.getUsedResources().getMemorySize());
+    assertEquals(12*GB, app_0.getCurrentConsumption().getMemorySize());
+    assertEquals(0*GB, app_1.getCurrentConsumption().getMemorySize());
   }
 
   @SuppressWarnings({ "unchecked", "rawtypes" })
@@ -1100,6 +1108,8 @@ public class TestLeafQueue {
     final int numNodes = 2;
     Resource clusterResource = Resources.createResource(numNodes * (8*GB), 1);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     CapacitySchedulerQueueManager mockCapacitySchedulerQueueManager
         = mock(CapacitySchedulerQueueManager.class);
@@ -1122,6 +1132,8 @@ public class TestLeafQueue {
     qb.setUserLimit(100);
     qb.setUserLimitFactor(1);
 
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     final ApplicationAttemptId appAttemptId_0 =
               TestUtils.getMockApplicationAttemptId(0, 0);
     FiCaSchedulerApp app_0 =
@@ -1256,106 +1268,6 @@ public class TestLeafQueue {
   }
 
   @Test
-  public void testUserHeadroomMultiApp() throws Exception {
-    // Mock the queue
-    LeafQueue a = stubLeafQueue((LeafQueue)queues.get(A));
-    //unset maxCapacity
-    a.setMaxCapacity(1.0f);
-
-    // Users
-    final String user_0 = "user_0";
-    final String user_1 = "user_1";
-
-    // Submit applications
-    final ApplicationAttemptId appAttemptId_0 =
-        TestUtils.getMockApplicationAttemptId(0, 0);
-    FiCaSchedulerApp app_0 =
-        new FiCaSchedulerApp(appAttemptId_0, user_0, a,
-            a.getAbstractUsersManager(), spyRMContext);
-    a.submitApplicationAttempt(app_0, user_0);
-
-    final ApplicationAttemptId appAttemptId_1 =
-        TestUtils.getMockApplicationAttemptId(1, 0);
-    FiCaSchedulerApp app_1 =
-        new FiCaSchedulerApp(appAttemptId_1, user_0, a,
-            a.getAbstractUsersManager(), spyRMContext);
-    a.submitApplicationAttempt(app_1, user_0);  // same user
-
-    final ApplicationAttemptId appAttemptId_2 =
-        TestUtils.getMockApplicationAttemptId(2, 0);
-    FiCaSchedulerApp app_2 =
-        new FiCaSchedulerApp(appAttemptId_2, user_1, a,
-            a.getAbstractUsersManager(), spyRMContext);
-    a.submitApplicationAttempt(app_2, user_1);
-
-    // Setup some nodes
-    String host_0 = "127.0.0.1";
-    FiCaSchedulerNode node_0 = TestUtils.getMockNode(host_0, DEFAULT_RACK, 
-      0, 16*GB);
-    String host_1 = "127.0.0.2";
-    FiCaSchedulerNode node_1 = TestUtils.getMockNode(host_1, DEFAULT_RACK, 
-      0, 16*GB);
-
-    Map<ApplicationAttemptId, FiCaSchedulerApp> apps = ImmutableMap.of(
-        app_0.getApplicationAttemptId(), app_0, app_1.getApplicationAttemptId(),
-        app_1, app_2.getApplicationAttemptId(), app_2);
-    Map<NodeId, FiCaSchedulerNode> nodes = ImmutableMap.of(node_0.getNodeID(),
-        node_0, node_1.getNodeID(), node_1);
-
-    final int numNodes = 2;
-    Resource clusterResource = Resources.createResource(numNodes * (16*GB), 1);
-    when(csContext.getNumClusterNodes()).thenReturn(numNodes);
-
-    Priority priority = TestUtils.createMockPriority(1);
-
-    app_0.updateResourceRequests(Collections.singletonList(
-            TestUtils.createResourceRequest(ResourceRequest.ANY, 1*GB, 1, true,
-                priority, recordFactory)));
-
-    applyCSAssignment(clusterResource,
-        a.assignContainers(clusterResource, node_0,
-        new ResourceLimits(clusterResource),
-        SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY), a, nodes, apps);
-    assertEquals(1*GB, a.getUsedResources().getMemorySize());
-    assertEquals(1*GB, app_0.getCurrentConsumption().getMemorySize());
-    assertEquals(0*GB, app_1.getCurrentConsumption().getMemorySize());
-    //Now, headroom is the same for all apps for a given user + queue combo
-    //and a change to any app's headroom is reflected for all the user's apps
-    //once those apps are active/have themselves calculated headroom for 
-    //allocation at least one time
-    assertEquals(2*GB, app_0.getHeadroom().getMemorySize());
-    assertEquals(0*GB, app_1.getHeadroom().getMemorySize());//not yet active
-    assertEquals(0*GB, app_2.getHeadroom().getMemorySize());//not yet active
-
-    app_1.updateResourceRequests(Collections.singletonList(
-        TestUtils.createResourceRequest(ResourceRequest.ANY, 1*GB, 2, true,
-            priority, recordFactory)));
-
-    applyCSAssignment(clusterResource,
-        a.assignContainers(clusterResource, node_0,
-        new ResourceLimits(clusterResource),
-        SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY), a, nodes, apps);
-    assertEquals(2*GB, a.getUsedResources().getMemorySize());
-    assertEquals(1*GB, app_0.getCurrentConsumption().getMemorySize());
-    assertEquals(1*GB, app_1.getCurrentConsumption().getMemorySize());
-    assertEquals(1*GB, app_0.getHeadroom().getMemorySize());
-    assertEquals(1*GB, app_1.getHeadroom().getMemorySize());//now active
-    assertEquals(0*GB, app_2.getHeadroom().getMemorySize());//not yet active
-
-    //Complete container and verify that headroom is updated, for both apps 
-    //for the user
-    RMContainer rmContainer = app_0.getLiveContainers().iterator().next();
-    a.completedContainer(clusterResource, app_0, node_0, rmContainer,
-    ContainerStatus.newInstance(rmContainer.getContainerId(),
-	ContainerState.COMPLETE, "",
-	ContainerExitStatus.KILLED_BY_RESOURCEMANAGER),
-    RMContainerEventType.KILL, null, true);
-
-    assertEquals(2*GB, app_0.getHeadroom().getMemorySize());
-    assertEquals(2*GB, app_1.getHeadroom().getMemorySize());
-  }
-
-  @Test
   public void testHeadroomWithMaxCap() throws Exception {
     // Mock the queue
     LeafQueue a = stubLeafQueue((LeafQueue)queues.get(A));
@@ -1403,7 +1315,12 @@ public class TestLeafQueue {
     final int numNodes = 2;
     Resource clusterResource = Resources.createResource(numNodes * (8*GB), 1);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
- 
+
+    ParentQueue root = (ParentQueue) queues
+        .get(CapacitySchedulerConfiguration.ROOT);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
+
     // Setup resource-requests
     Priority priority = TestUtils.createMockPriority(1);
     app_0.updateResourceRequests(Collections.singletonList(
@@ -1454,6 +1371,8 @@ public class TestLeafQueue {
     
     // Submit requests for app_1 and set max-cap
     a.setMaxCapacity(.1f);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     app_2.updateResourceRequests(Collections.singletonList(
         TestUtils.createResourceRequest(ResourceRequest.ANY, 1*GB, 1, true,
             priority, recordFactory)));
@@ -1542,6 +1461,8 @@ public class TestLeafQueue {
         Resources.createResource(numNodes * (8*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
     when(csContext.getClusterResource()).thenReturn(clusterResource);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priority = TestUtils.createMockPriority(1);
@@ -1624,6 +1545,8 @@ public class TestLeafQueue {
     // Test max-capacity
     // Now - no more allocs since we are at max-cap
     a.setMaxCapacity(0.5f);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     applyCSAssignment(clusterResource,
         a.assignContainers(clusterResource, node_0,
         new ResourceLimits(clusterResource),
@@ -1638,6 +1561,8 @@ public class TestLeafQueue {
     // Now, allocations should goto app_3 since it's under user-limit 
     a.setMaxCapacity(1.0f);
     a.setUserLimitFactor(1);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     applyCSAssignment(clusterResource,
         a.assignContainers(clusterResource, node_0,
         new ResourceLimits(clusterResource),
@@ -1743,6 +1668,8 @@ public class TestLeafQueue {
     Resource clusterResource = 
         Resources.createResource(numNodes * (4*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     
     // Setup resource-requests
     Priority priority = TestUtils.createMockPriority(1);
@@ -1880,6 +1807,8 @@ public class TestLeafQueue {
     final int numNodes = 3;
     Resource clusterResource = 
         Resources.createResource(numNodes * (4*GB), numNodes * 16);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
     when(csContext.getMaximumResourceCapability()).thenReturn(
         Resources.createResource(4*GB, 16));
@@ -2051,6 +1980,8 @@ public class TestLeafQueue {
     Resource clusterResource = 
         Resources.createResource(numNodes * (8*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     
     // Setup resource-requests and submit
     Priority priority = TestUtils.createMockPriority(1);
@@ -2237,11 +2168,10 @@ public class TestLeafQueue {
     CSQueue newRoot = CapacitySchedulerQueueManager.parseQueue(csContext,
         csConf, null, CapacitySchedulerConfiguration.ROOT, newQueues, queues,
         TestUtils.spyHook);
-    queues = newQueues;
     root.reinitialize(newRoot, cs.getClusterResource());
 
     // Manipulate queue 'b'
-    LeafQueue a = stubLeafQueue((LeafQueue) queues.get(B));
+    LeafQueue a = stubLeafQueue((LeafQueue) newQueues.get(B));
 
     // Check locality parameters.
     assertEquals(2, a.getNodeLocalityDelay());
@@ -2277,6 +2207,8 @@ public class TestLeafQueue {
     Resource clusterResource =
         Resources.createResource(numNodes * (8 * GB), numNodes * 16);
     when(spyRMContext.getScheduler().getNumClusterNodes()).thenReturn(numNodes);
+    newRoot.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests and submit
     Priority priority = TestUtils.createMockPriority(1);
@@ -2412,6 +2344,8 @@ public class TestLeafQueue {
     Resource clusterResource = 
         Resources.createResource(numNodes * (8*GB), 1);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     
     // Setup resource-requests and submit
     List<ResourceRequest> app_0_requests_0 = new ArrayList<ResourceRequest>();
@@ -2545,6 +2479,8 @@ public class TestLeafQueue {
     Resource clusterResource = Resources.createResource(
         numNodes * (8*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests and submit
     Priority priority = TestUtils.createMockPriority(1);
@@ -2660,17 +2596,14 @@ public class TestLeafQueue {
     assertEquals(2, e.getNumActiveApplications());
     assertEquals(1, e.getNumPendingApplications());
 
-    csConf.setDouble(CapacitySchedulerConfiguration
-        .MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT,
-        CapacitySchedulerConfiguration
-        .DEFAULT_MAXIMUM_APPLICATIONMASTERS_RESOURCE_PERCENT * 2);
+    csConf.setDouble(
+        CapacitySchedulerConfiguration.MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT,
+        CapacitySchedulerConfiguration.DEFAULT_MAXIMUM_APPLICATIONMASTERS_RESOURCE_PERCENT
+            * 2);
     Map<String, CSQueue> newQueues = new HashMap<String, CSQueue>();
-    CSQueue newRoot =
-        CapacitySchedulerQueueManager.parseQueue(csContext, csConf, null,
-            CapacitySchedulerConfiguration.ROOT,
-            newQueues, queues,
-            TestUtils.spyHook);
-    queues = newQueues;
+    CSQueue newRoot = CapacitySchedulerQueueManager.parseQueue(csContext,
+        csConf, null, CapacitySchedulerConfiguration.ROOT, newQueues, queues,
+        TestUtils.spyHook);
     root.reinitialize(newRoot, csContext.getClusterResource());
 
     // after reinitialization
@@ -2697,7 +2630,6 @@ public class TestLeafQueue {
             CapacitySchedulerConfiguration.ROOT,
             newQueues, queues,
             TestUtils.spyHook);
-    queues = newQueues;
     root.reinitialize(newRoot, cs.getClusterResource());
 
     // after reinitialization
@@ -2745,7 +2677,7 @@ public class TestLeafQueue {
     assertEquals(1, e.getNumPendingApplications());
 
     Resource clusterResource = Resources.createResource(200 * 16 * GB, 100 * 32); 
-    e.updateClusterResource(clusterResource,
+    root.updateClusterResource(clusterResource,
         new ResourceLimits(clusterResource));
 
     // after updating cluster resource
@@ -2837,6 +2769,9 @@ public class TestLeafQueue {
         numNodes * (8*GB), numNodes * 1);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
 
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
+
     // Setup resource-requests
     // resourceName: <priority, memory, #containers, relaxLocality>
     // host_0_0: < 1, 1GB, 1, true >
@@ -3036,36 +2971,44 @@ public class TestLeafQueue {
   @Test
   public void testMaxAMResourcePerQueuePercentAfterQueueRefresh()
       throws Exception {
+    Map<String, CSQueue> queues = new HashMap<String, CSQueue>();
     CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration();
-    Resource clusterResource = Resources
-        .createResource(100 * 16 * GB, 100 * 32);
+    final String newRootName = "root" + System.currentTimeMillis();
+    setupQueueConfiguration(csConf, newRootName);
+
+    Resource clusterResource = Resources.createResource(100 * 16 * GB,
+        100 * 32);
     CapacitySchedulerContext csContext = mockCSContext(csConf, clusterResource);
     when(csContext.getRMContext()).thenReturn(rmContext);
-    csConf.setFloat(CapacitySchedulerConfiguration.
-        MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT, 0.1f);
-    ParentQueue root = new ParentQueue(csContext, 
-        CapacitySchedulerConfiguration.ROOT, null, null);
-    csConf.setCapacity(CapacitySchedulerConfiguration.ROOT + "." + A, 80);
-    LeafQueue a = new LeafQueue(csContext, A, root, null);
-    assertEquals(0.1f, a.getMaxAMResourcePerQueuePercent(), 1e-3f);
-    assertEquals(a.calculateAndGetAMResourceLimit(),
-        Resources.createResource(160 * GB, 1));
-    
-    csConf.setFloat(CapacitySchedulerConfiguration.
-        MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT, 0.2f);
-    LeafQueue newA = new LeafQueue(csContext, A, root, null);
-    a.reinitialize(newA, clusterResource);
-    assertEquals(0.2f, a.getMaxAMResourcePerQueuePercent(), 1e-3f);
-    assertEquals(a.calculateAndGetAMResourceLimit(),
-        Resources.createResource(320 * GB, 1));
+    csConf.setFloat(
+        CapacitySchedulerConfiguration.MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT,
+        0.1f);
+
+    CSQueue root;
+    root = CapacitySchedulerQueueManager.parseQueue(csContext, csConf, null,
+        CapacitySchedulerConfiguration.ROOT, queues, queues, TestUtils.spyHook);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
-    Resource newClusterResource = Resources.createResource(100 * 20 * GB,
-        100 * 32);
-    a.updateClusterResource(newClusterResource, 
-        new ResourceLimits(newClusterResource));
-    //  100 * 20 * 0.2 = 400
-    assertEquals(a.calculateAndGetAMResourceLimit(),
-        Resources.createResource(400 * GB, 1));
+    // Manipulate queue 'b'
+    LeafQueue b = stubLeafQueue((LeafQueue) queues.get(B));
+    assertEquals(0.1f, b.getMaxAMResourcePerQueuePercent(), 1e-3f);
+    assertEquals(b.calculateAndGetAMResourceLimit(),
+        Resources.createResource(159 * GB, 1));
+
+    csConf.setFloat(
+        CapacitySchedulerConfiguration.MAXIMUM_APPLICATION_MASTERS_RESOURCE_PERCENT,
+        0.2f);
+    clusterResource = Resources.createResource(100 * 20 * GB, 100 * 32);
+    Map<String, CSQueue> newQueues = new HashMap<String, CSQueue>();
+    CSQueue newRoot = CapacitySchedulerQueueManager.parseQueue(csContext,
+        csConf, null, CapacitySchedulerConfiguration.ROOT, newQueues, queues,
+        TestUtils.spyHook);
+    root.reinitialize(newRoot, clusterResource);
+
+    b = stubLeafQueue((LeafQueue) newQueues.get(B));
+    assertEquals(b.calculateAndGetAMResourceLimit(),
+        Resources.createResource(320 * GB, 1));
   }
   
   @Test
@@ -3142,6 +3085,8 @@ public class TestLeafQueue {
     Resource clusterResource = Resources.createResource(numNodes * (16 * GB),
         numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     String user_0 = "user_0";
 
@@ -3308,6 +3253,8 @@ public class TestLeafQueue {
     Resource clusterResource = Resources.createResource(
         numNodes * (16*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     String user_0 = "user_0";
 
@@ -3435,6 +3382,8 @@ public class TestLeafQueue {
     Resource clusterResource = 
         Resources.createResource(numNodes * (8*GB), numNodes * 16);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
     
     // Setup resource-requests and submit
     // App0 has node local request for host_0/host_1, and app1 has node local
@@ -3533,6 +3482,8 @@ public class TestLeafQueue {
     Resource clusterResource =
         Resources.createResource(numNodes * (100*GB), numNodes * 128);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Pending resource requests for app_0 and app_1 total 5GB.
     Priority priority = TestUtils.createMockPriority(1);
@@ -3699,6 +3650,8 @@ public class TestLeafQueue {
     Resource clusterResource =
         Resources.createResource(numNodes * (100*GB), numNodes * 128);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Pending resource requests for user_0: app_0 and app_1 total 3GB.
     Priority priority = TestUtils.createMockPriority(1);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
index cdbbc51..a2318f2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
@@ -248,6 +248,8 @@ public class TestParentQueue {
         Resources.createResource(numNodes * (memoryPerNode*GB),
             numNodes * coresPerNode);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Start testing
     LeafQueue a = (LeafQueue)queues.get(A);
@@ -486,6 +488,8 @@ public class TestParentQueue {
         Resources.createResource(numNodes * (memoryPerNode*GB), 
             numNodes * coresPerNode);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Start testing
     CSQueue a = queues.get(A);
@@ -695,6 +699,8 @@ public class TestParentQueue {
         Resources.createResource(numNodes * (memoryPerNode*GB),
             numNodes * coresPerNode);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Start testing
     LeafQueue a = (LeafQueue)queues.get(A);
@@ -771,6 +777,8 @@ public class TestParentQueue {
         Resources.createResource(numNodes * (memoryPerNode*GB),
             numNodes * coresPerNode);
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Start testing
     LeafQueue b3 = (LeafQueue)queues.get(B3);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
index 5e6548b..b0f6c73 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
@@ -267,6 +267,8 @@ public class TestReservations {
     final int numNodes = 3;
     Resource clusterResource = Resources.createResource(numNodes * (8 * GB));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priorityAM = TestUtils.createMockPriority(1);
@@ -454,6 +456,8 @@ public class TestReservations {
     final int numNodes = 3;
     Resource clusterResource = Resources.createResource(numNodes * (8 * GB));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priorityAM = TestUtils.createMockPriority(1);
@@ -600,6 +604,8 @@ public class TestReservations {
     final int numNodes = 3;
     Resource clusterResource = Resources.createResource(numNodes * (8 * GB));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priorityAM = TestUtils.createMockPriority(1);
@@ -782,6 +788,8 @@ public class TestReservations {
     final int numNodes = 2;
     Resource clusterResource = Resources.createResource(numNodes * (8 * GB));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priorityAM = TestUtils.createMockPriority(1);
@@ -898,6 +906,8 @@ public class TestReservations {
         8 * GB);
     
     Resource clusterResource = Resources.createResource(2 * 8 * GB);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority p = TestUtils.createMockPriority(5);
@@ -1072,6 +1082,8 @@ public class TestReservations {
     final int numNodes = 2;
     Resource clusterResource = Resources.createResource(numNodes * (8 * GB));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priorityAM = TestUtils.createMockPriority(1);
@@ -1260,6 +1272,8 @@ public class TestReservations {
     final int numNodes = 2;
     Resource clusterResource = Resources.createResource(numNodes * (8 * GB));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // Setup resource-requests
     Priority priorityAM = TestUtils.createMockPriority(1);
@@ -1422,6 +1436,9 @@ public class TestReservations {
     final int numNodes = 3;
     Resource clusterResource = Resources.createResource(numNodes * (8 * GB));
     when(csContext.getNumClusterNodes()).thenReturn(numNodes);
+    root.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
+
 
     // Setup resource-requests
     Priority priorityAM = TestUtils.createMockPriority(1);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/TestPriorityUtilizationQueueOrderingPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/TestPriorityUtilizationQueueOrderingPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/TestPriorityUtilizationQueueOrderingPolicy.java
index e3c108a..b9d5b82 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/TestPriorityUtilizationQueueOrderingPolicy.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/TestPriorityUtilizationQueueOrderingPolicy.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy;
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.ImmutableTable;
 import org.apache.hadoop.yarn.api.records.Priority;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacities;
 import org.junit.Assert;
@@ -52,6 +53,8 @@ public class TestPriorityUtilizationQueueOrderingPolicy {
       when(q.getQueueCapacities()).thenReturn(qc);
       when(q.getPriority()).thenReturn(Priority.newInstance(priorities[i]));
 
+      QueueResourceQuotas qr = new QueueResourceQuotas();
+      when(q.getQueueResourceQuotas()).thenReturn(qr);
       list.add(q);
     }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95a81934/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
index 1108f1a..0132348 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
@@ -354,10 +354,10 @@ public class TestRMWebServicesCapacitySched extends JerseyTestBase {
   private void verifySubQueue(JSONObject info, String q, 
       float parentAbsCapacity, float parentAbsMaxCapacity)
       throws JSONException, Exception {
-    int numExpectedElements = 18;
+    int numExpectedElements = 20;
     boolean isParentQueue = true;
     if (!info.has("queues")) {
-      numExpectedElements = 31;
+      numExpectedElements = 33;
       isParentQueue = false;
     }
     assertEquals("incorrect number of elements", numExpectedElements, info.length());




[41/50] [abbrv] hadoop git commit: HADOOP-14183. Remove service loader config file for wasb fs. Contributed by Esfandiar Manii.

Posted by wa...@apache.org.
HADOOP-14183. Remove service loader config file for wasb fs.
Contributed by Esfandiar Manii.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/54356b1e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/54356b1e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/54356b1e

Branch: refs/heads/YARN-5881
Commit: 54356b1e8366a23fff1bb45601efffc743306efc
Parents: 8d953c2
Author: Steve Loughran <st...@apache.org>
Authored: Thu Aug 10 16:46:33 2017 +0100
Committer: Steve Loughran <st...@apache.org>
Committed: Thu Aug 10 16:46:33 2017 +0100

----------------------------------------------------------------------
 .../src/main/resources/core-default.xml            | 12 ++++++++++++
 .../services/org.apache.hadoop.fs.FileSystem       | 17 -----------------
 2 files changed, 12 insertions(+), 17 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/54356b1e/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 593fd85..e6b6919 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -1322,6 +1322,18 @@
 
 <!-- Azure file system properties -->
 <property>
+  <name>fs.wasb.impl</name>
+  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
+  <description>The implementation class of the Native Azure Filesystem</description>
+</property>
+
+<property>
+  <name>fs.wasbs.impl</name>
+  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure</value>
+  <description>The implementation class of the Secure Native Azure Filesystem</description>
+</property>
+
+<property>
   <name>fs.azure.secure.mode</name>
   <value>false</value>
   <description>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54356b1e/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem b/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
deleted file mode 100644
index 9f4922b..0000000
--- a/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
+++ /dev/null
@@ -1,17 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-org.apache.hadoop.fs.azure.NativeAzureFileSystem
-org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure
\ No newline at end of file
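
With the ServiceLoader registration file removed, the wasb and wasbs schemes
now resolve through the fs.wasb.impl and fs.wasbs.impl keys added to
core-default.xml above. Client usage is unchanged; a sketch for reference,
where the account and container names are placeholders:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    Configuration conf = new Configuration();
    // Scheme resolution now goes through fs.wasb.impl from core-default.xml
    // rather than the removed META-INF/services entry.
    FileSystem fs = FileSystem.get(
        URI.create("wasb://mycontainer@myaccount.blob.core.windows.net/"),
        conf);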




[24/50] [abbrv] hadoop git commit: HDFS-10326. Disable setting tcp socket send/receive buffers for write pipelines. Contributed by Daryn Sharp.

Posted by wa...@apache.org.
HDFS-10326. Disable setting tcp socket send/receive buffers for write pipelines. Contributed by Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/71b8dda4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/71b8dda4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/71b8dda4

Branch: refs/heads/YARN-5881
Commit: 71b8dda4f6ff6006410f3a9fe7717aa096004b1b
Parents: e0c2414
Author: Haohui Mai <wh...@apache.org>
Authored: Tue Aug 8 14:58:11 2017 -0700
Committer: Haohui Mai <wh...@apache.org>
Committed: Tue Aug 8 14:58:16 2017 -0700

----------------------------------------------------------------------
 .../hadoop/hdfs/protocol/HdfsConstants.java     |  4 ++--
 .../src/main/resources/hdfs-default.xml         |  9 ++++++---
 .../hadoop/hdfs/TestDFSClientSocketSize.java    | 20 ++++++++++++--------
 3 files changed, 20 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/71b8dda4/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index b636121..2681f12 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -48,8 +48,8 @@ public final class HdfsConstants {
   public static final byte COLD_STORAGE_POLICY_ID = 2;
   public static final String COLD_STORAGE_POLICY_NAME = "COLD";
 
-  // TODO should be conf injected?
-  public static final int DEFAULT_DATA_SOCKET_SIZE = 128 * 1024;
+  public static final int DEFAULT_DATA_SOCKET_SIZE = 0;
+
   /**
    * A special path component contained in the path for a snapshot file/dir
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/71b8dda4/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 8bf2b8c..bb62359 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -2545,13 +2545,14 @@
 
 <property>
   <name>dfs.client.socket.send.buffer.size</name>
-  <value>131072</value>
+  <value>0</value>
   <description>
     Socket send buffer size for a write pipeline in DFSClient side.
     This may affect TCP connection throughput.
     If it is set to zero or negative value,
     no buffer size will be set explicitly,
     thus enable tcp auto-tuning on some system.
+    The default value is 0.
   </description>
 </property>
 
@@ -3025,23 +3026,25 @@
 
 <property>
   <name>dfs.datanode.transfer.socket.send.buffer.size</name>
-  <value>131072</value>
+  <value>0</value>
   <description>
     Socket send buffer size for DataXceiver (mirroring packets to downstream
     in pipeline). This may affect TCP connection throughput.
     If it is set to zero or negative value, no buffer size will be set
     explicitly, thus enable tcp auto-tuning on some system.
+    The default value is 0.
   </description>
 </property>
 
 <property>
   <name>dfs.datanode.transfer.socket.recv.buffer.size</name>
-  <value>131072</value>
+  <value>0</value>
   <description>
     Socket receive buffer size for DataXceiver (receiving packets from client
     during block writing). This may affect TCP connection throughput.
     If it is set to zero or negative value, no buffer size will be set
     explicitly, thus enable tcp auto-tuning on some system.
+    The default value is 0.
   </description>
 </property>
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/71b8dda4/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientSocketSize.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientSocketSize.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientSocketSize.java
index fa12f34..40cd676 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientSocketSize.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientSocketSize.java
@@ -30,7 +30,6 @@ import org.slf4j.LoggerFactory;
 import java.io.IOException;
 import java.net.Socket;
 
-import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_DEFAULT;
 import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_KEY;
 import static org.junit.Assert.assertTrue;
 
@@ -42,15 +41,16 @@ public class TestDFSClientSocketSize {
   }
 
   /**
-   * The setting of socket send buffer size in
-   * {@link java.net.Socket#setSendBufferSize(int)} is only a hint.  Actual
-   * value may differ.  We just sanity check that it is somewhere close.
+   * Test that the send buffer size default value is 0, in which case the socket
+   * will use a TCP auto-tuned value.
    */
   @Test
   public void testDefaultSendBufferSize() throws IOException {
-    assertTrue("Send buffer size should be somewhere near default.",
-        getSendBufferSize(new Configuration()) >=
-            DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_DEFAULT / 2);
+    final int sendBufferSize = getSendBufferSize(new Configuration());
+    LOG.info("If not specified, the auto tuned send buffer size is: {}",
+        sendBufferSize);
+    assertTrue("Send buffer size should be non-negative value which is " +
+        "determined by system (kernel).", sendBufferSize > 0);
   }
 
   /**
@@ -73,6 +73,10 @@ public class TestDFSClientSocketSize {
         sendBufferSize1 > sendBufferSize2);
   }
 
+  /**
+   * Test that if the send buffer size is 0, the socket will use a TCP
+   * auto-tuned value.
+   */
   @Test
   public void testAutoTuningSendBufferSize() throws IOException {
     final Configuration conf = new Configuration();
@@ -80,7 +84,7 @@ public class TestDFSClientSocketSize {
     final int sendBufferSize = getSendBufferSize(conf);
     LOG.info("The auto tuned send buffer size is: {}", sendBufferSize);
     assertTrue("Send buffer size should be non-negative value which is " +
-          "determined by system (kernel).", sendBufferSize > 0);
+        "determined by system (kernel).", sendBufferSize > 0);
   }
 
   private int getSendBufferSize(Configuration conf) throws IOException {
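
For deployments that prefer a fixed socket buffer over TCP auto-tuning, the
old behaviour can be restored explicitly. A sketch using the client-side key
referenced above; the 128 KB value mirrors the previous
DEFAULT_DATA_SOCKET_SIZE:

    import org.apache.hadoop.conf.Configuration;
    import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_KEY;

    Configuration conf = new Configuration();
    // 0 (the new default) defers to kernel auto-tuning; any positive value
    // pins the socket send buffer, e.g. the previous 128 KB default.
    conf.setInt(DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_KEY, 128 * 1024);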

